Test Report: Docker_Linux_containerd_arm64 19364

                    
25094c99c11af6abe50820a6398a27b4b8dd70b0:2024-08-03:35633

Failed tests (2/336)

Order  Failed test                                              Duration (s)
-----  -------------------------------------------------------  ------------
38     TestAddons/serial/Volcano                                199.93
309    TestStartStop/group/old-k8s-version/serial/SecondStart   379.53
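To reproduce a single failure locally, the integration suite can be filtered by test name with Go's -run flag. A hypothetical invocation (the package path follows the minikube repo layout, and the --minikube-start-args wiring is an assumption mirroring this job's start flags; adjust for your checkout, and build out/minikube first):

	# run only the failing Volcano subtest against a docker/containerd cluster
	go test ./test/integration -v -timeout 90m -run 'TestAddons/serial/Volcano' \
	  -args --minikube-start-args='--driver=docker --container-runtime=containerd'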
TestAddons/serial/Volcano (199.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 51.527482ms
addons_test.go:897: volcano-scheduler stabilized in 51.613144ms
addons_test.go:913: volcano-controller stabilized in 51.654892ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-4f6t9" [50b5ee31-a17e-4ca9-a9c7-228645351120] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.00378198s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-dr4dc" [6ed1fcc7-6a50-4797-9107-7dab6f693ce1] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003804519s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-v7g8t" [c7e73f5e-95c8-4147-9ccc-9f5d4f9f95c4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003615109s
addons_test.go:932: (dbg) Run:  kubectl --context addons-369401 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-369401 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-369401 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b91bbe01-f7fa-4377-882f-4f05a6f52bd7] Pending
helpers_test.go:344: "test-job-nginx-0" [b91bbe01-f7fa-4377-882f-4f05a6f52bd7] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-369401 -n addons-369401
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-03 22:56:05.230648499 +0000 UTC m=+448.547977299
addons_test.go:964: (dbg) Run:  kubectl --context addons-369401 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-369401 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-b0941f63-69fe-4ad0-a414-efeaaece75df
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-47hfg (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-47hfg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-369401 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-369401 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
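The FailedScheduling event above is the proximate cause: the vcjob's nginx task requests a full CPU (requests and limits of cpu: 1), while the single minikube node was created with only 2 CPUs (see the docker inspect output below) and the addon pods already running leave less than one CPU unreserved. Two hypothetical diagnostics (not part of the captured run) that would confirm the shortfall on a live cluster:

	# how much CPU is already requested on the node
	kubectl --context addons-369401 describe node addons-369401 | grep -A 6 'Allocated resources'
	# per-pod CPU requests across all namespaces
	kubectl --context addons-369401 get pods -A -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'
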
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-369401
helpers_test.go:235: (dbg) docker inspect addons-369401:

-- stdout --
	[
	    {
	        "Id": "8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8",
	        "Created": "2024-08-03T22:49:30.637353488Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1187229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-03T22:49:30.771151629Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8/hosts",
	        "LogPath": "/var/lib/docker/containers/8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8/8e7e642a9091e6cffa26f03062e3dcc2e2e68df386bf18e1bd0c87a47227bdb8-json.log",
	        "Name": "/addons-369401",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-369401:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-369401",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3c4bbc8c15afb79a07932e999db75dfebc23041ea2a41605c44a21592d3c0626-init/diff:/var/lib/docker/overlay2/d0e9013ff93972be10de1ce499c76c412f16d87933b328b08c9d90d7f75831bd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3c4bbc8c15afb79a07932e999db75dfebc23041ea2a41605c44a21592d3c0626/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3c4bbc8c15afb79a07932e999db75dfebc23041ea2a41605c44a21592d3c0626/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3c4bbc8c15afb79a07932e999db75dfebc23041ea2a41605c44a21592d3c0626/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-369401",
	                "Source": "/var/lib/docker/volumes/addons-369401/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-369401",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-369401",
	                "name.minikube.sigs.k8s.io": "addons-369401",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b29f67e3daa0df7166f2c237ae13bee05d206b0039452dcda6860ead1ce0d1e5",
	            "SandboxKey": "/var/run/docker/netns/b29f67e3daa0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34257"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34256"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-369401": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "654070a0884360dfe73816e9b81ca5d3118e133a3e3131cc201df2a628a840bd",
	                    "EndpointID": "e58476682f5a9da22ff487f34f1b55ee672a7c08dd0bef1d7c3b53059c73f2aa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-369401",
	                        "8e7e642a9091"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
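
The inspect output corroborates the sizing: HostConfig.NanoCpus is 2000000000 (2 CPUs) and HostConfig.Memory is 4194304000 bytes (the --memory=4000 recorded in the audit log below). A hypothetical one-liner to pull just those fields, assuming jq is available:

	docker inspect addons-369401 | jq '.[0].HostConfig | {Memory, NanoCpus}'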
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-369401 -n addons-369401
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 logs -n 25: (1.541010676s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-024661   | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-024661              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-024661              | download-only-024661   | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only              | download-only-430970   | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-430970              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-430970              | download-only-430970   | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only              | download-only-921243   | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-921243              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-921243              | download-only-921243   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-024661              | download-only-024661   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-430970              | download-only-430970   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| delete  | -p download-only-921243              | download-only-921243   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| start   | --download-only -p                   | download-docker-815058 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | download-docker-815058               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-815058            | download-docker-815058 | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| start   | --download-only -p                   | binary-mirror-761872   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-761872                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37059               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-761872              | binary-mirror-761872   | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:49 UTC |
	| addons  | enable dashboard -p                  | addons-369401          | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-369401                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-369401          | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC |                     |
	|         | addons-369401                        |                        |         |         |                     |                     |
	| start   | -p addons-369401 --wait=true         | addons-369401          | jenkins | v1.33.1 | 03 Aug 24 22:49 UTC | 03 Aug 24 22:52 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:49:06
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:49:06.283766 1186733 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:49:06.283951 1186733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:06.283963 1186733 out.go:304] Setting ErrFile to fd 2...
	I0803 22:49:06.283969 1186733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:49:06.284228 1186733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 22:49:06.284701 1186733 out.go:298] Setting JSON to false
	I0803 22:49:06.285577 1186733 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27092,"bootTime":1722698255,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 22:49:06.285651 1186733 start.go:139] virtualization:  
	I0803 22:49:06.288464 1186733 out.go:177] * [addons-369401] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 22:49:06.291521 1186733 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 22:49:06.291608 1186733 notify.go:220] Checking for updates...
	I0803 22:49:06.295698 1186733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:49:06.297648 1186733 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 22:49:06.299296 1186733 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 22:49:06.301549 1186733 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 22:49:06.303853 1186733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 22:49:06.305983 1186733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:49:06.330190 1186733 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 22:49:06.330304 1186733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:49:06.385019 1186733 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:49:06.374974862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:49:06.385130 1186733 docker.go:307] overlay module found
	I0803 22:49:06.387700 1186733 out.go:177] * Using the docker driver based on user configuration
	I0803 22:49:06.389776 1186733 start.go:297] selected driver: docker
	I0803 22:49:06.389797 1186733 start.go:901] validating driver "docker" against <nil>
	I0803 22:49:06.389811 1186733 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 22:49:06.390480 1186733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:49:06.446402 1186733 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:49:06.437385622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:49:06.446571 1186733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:49:06.446805 1186733 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 22:49:06.449108 1186733 out.go:177] * Using Docker driver with root privileges
	I0803 22:49:06.451074 1186733 cni.go:84] Creating CNI manager for ""
	I0803 22:49:06.451094 1186733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:49:06.451104 1186733 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 22:49:06.451201 1186733 start.go:340] cluster config:
	{Name:addons-369401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-369401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:49:06.453383 1186733 out.go:177] * Starting "addons-369401" primary control-plane node in "addons-369401" cluster
	I0803 22:49:06.455922 1186733 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 22:49:06.458339 1186733 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0803 22:49:06.460498 1186733 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0803 22:49:06.460549 1186733 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0803 22:49:06.460560 1186733 cache.go:56] Caching tarball of preloaded images
	I0803 22:49:06.460589 1186733 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 22:49:06.460643 1186733 preload.go:172] Found /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 22:49:06.460653 1186733 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0803 22:49:06.461084 1186733 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/config.json ...
	I0803 22:49:06.461159 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/config.json: {Name:mkdc13209cf0905da73f9bca6315642d7614d31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:06.476718 1186733 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 22:49:06.476865 1186733 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 22:49:06.476892 1186733 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0803 22:49:06.476897 1186733 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0803 22:49:06.476905 1186733 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0803 22:49:06.476914 1186733 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0803 22:49:23.408679 1186733 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0803 22:49:23.408717 1186733 cache.go:194] Successfully downloaded all kic artifacts
	I0803 22:49:23.408773 1186733 start.go:360] acquireMachinesLock for addons-369401: {Name:mkaff7aa18be4b9ec6c999b00e084ce083f141a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 22:49:23.408901 1186733 start.go:364] duration metric: took 105.707µs to acquireMachinesLock for "addons-369401"
	I0803 22:49:23.408944 1186733 start.go:93] Provisioning new machine with config: &{Name:addons-369401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-369401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0803 22:49:23.409020 1186733 start.go:125] createHost starting for "" (driver="docker")
	I0803 22:49:23.411953 1186733 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0803 22:49:23.412203 1186733 start.go:159] libmachine.API.Create for "addons-369401" (driver="docker")
	I0803 22:49:23.412238 1186733 client.go:168] LocalClient.Create starting
	I0803 22:49:23.412364 1186733 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem
	I0803 22:49:23.808171 1186733 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem
	I0803 22:49:24.083418 1186733 cli_runner.go:164] Run: docker network inspect addons-369401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0803 22:49:24.099040 1186733 cli_runner.go:211] docker network inspect addons-369401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0803 22:49:24.099133 1186733 network_create.go:284] running [docker network inspect addons-369401] to gather additional debugging logs...
	I0803 22:49:24.099155 1186733 cli_runner.go:164] Run: docker network inspect addons-369401
	W0803 22:49:24.114056 1186733 cli_runner.go:211] docker network inspect addons-369401 returned with exit code 1
	I0803 22:49:24.114090 1186733 network_create.go:287] error running [docker network inspect addons-369401]: docker network inspect addons-369401: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-369401 not found
	I0803 22:49:24.114103 1186733 network_create.go:289] output of [docker network inspect addons-369401]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-369401 not found
	
	** /stderr **
	I0803 22:49:24.114204 1186733 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0803 22:49:24.128670 1186733 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000138690}
	I0803 22:49:24.128759 1186733 network_create.go:124] attempt to create docker network addons-369401 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0803 22:49:24.128823 1186733 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-369401 addons-369401
	I0803 22:49:24.199248 1186733 network_create.go:108] docker network addons-369401 192.168.49.0/24 created
	I0803 22:49:24.199281 1186733 kic.go:121] calculated static IP "192.168.49.2" for the "addons-369401" container
	I0803 22:49:24.199354 1186733 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0803 22:49:24.213433 1186733 cli_runner.go:164] Run: docker volume create addons-369401 --label name.minikube.sigs.k8s.io=addons-369401 --label created_by.minikube.sigs.k8s.io=true
	I0803 22:49:24.229658 1186733 oci.go:103] Successfully created a docker volume addons-369401
	I0803 22:49:24.229747 1186733 cli_runner.go:164] Run: docker run --rm --name addons-369401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-369401 --entrypoint /usr/bin/test -v addons-369401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0803 22:49:26.343962 1186733 cli_runner.go:217] Completed: docker run --rm --name addons-369401-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-369401 --entrypoint /usr/bin/test -v addons-369401:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (2.114165758s)
	I0803 22:49:26.343996 1186733 oci.go:107] Successfully prepared a docker volume addons-369401
	I0803 22:49:26.344013 1186733 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0803 22:49:26.344032 1186733 kic.go:194] Starting extracting preloaded images to volume ...
	I0803 22:49:26.344128 1186733 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-369401:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0803 22:49:30.574142 1186733 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-369401:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.229971966s)
	I0803 22:49:30.574176 1186733 kic.go:203] duration metric: took 4.23014032s to extract preloaded images to volume ...
	W0803 22:49:30.574315 1186733 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0803 22:49:30.574461 1186733 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0803 22:49:30.624873 1186733 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-369401 --name addons-369401 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-369401 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-369401 --network addons-369401 --ip 192.168.49.2 --volume addons-369401:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0803 22:49:30.946834 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Running}}
	I0803 22:49:30.965115 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:49:30.988080 1186733 cli_runner.go:164] Run: docker exec addons-369401 stat /var/lib/dpkg/alternatives/iptables
	I0803 22:49:31.058498 1186733 oci.go:144] the created container "addons-369401" has a running status.
	I0803 22:49:31.058531 1186733 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa...
	I0803 22:49:31.616971 1186733 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0803 22:49:31.644774 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:49:31.672389 1186733 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0803 22:49:31.672410 1186733 kic_runner.go:114] Args: [docker exec --privileged addons-369401 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0803 22:49:31.743023 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:49:31.767861 1186733 machine.go:94] provisionDockerMachine start ...
	I0803 22:49:31.767968 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:31.807109 1186733 main.go:141] libmachine: Using SSH client type: native
	I0803 22:49:31.807369 1186733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I0803 22:49:31.807377 1186733 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 22:49:31.947980 1186733 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-369401
	
	I0803 22:49:31.948045 1186733 ubuntu.go:169] provisioning hostname "addons-369401"
	I0803 22:49:31.948144 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:31.969010 1186733 main.go:141] libmachine: Using SSH client type: native
	I0803 22:49:31.969256 1186733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I0803 22:49:31.969272 1186733 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-369401 && echo "addons-369401" | sudo tee /etc/hostname
	I0803 22:49:32.124338 1186733 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-369401
	
	I0803 22:49:32.124416 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.141586 1186733 main.go:141] libmachine: Using SSH client type: native
	I0803 22:49:32.141868 1186733 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34253 <nil> <nil>}
	I0803 22:49:32.141890 1186733 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-369401' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-369401/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-369401' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 22:49:32.272615 1186733 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 22:49:32.272641 1186733 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19364-1180294/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-1180294/.minikube}
	I0803 22:49:32.272663 1186733 ubuntu.go:177] setting up certificates
	I0803 22:49:32.272676 1186733 provision.go:84] configureAuth start
	I0803 22:49:32.272769 1186733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-369401
	I0803 22:49:32.296233 1186733 provision.go:143] copyHostCerts
	I0803 22:49:32.296314 1186733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem (1078 bytes)
	I0803 22:49:32.296443 1186733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem (1123 bytes)
	I0803 22:49:32.296502 1186733 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem (1675 bytes)
	I0803 22:49:32.296547 1186733 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem org=jenkins.addons-369401 san=[127.0.0.1 192.168.49.2 addons-369401 localhost minikube]
	I0803 22:49:32.480481 1186733 provision.go:177] copyRemoteCerts
	I0803 22:49:32.480574 1186733 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 22:49:32.480642 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.496436 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:49:32.589473 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 22:49:32.613980 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0803 22:49:32.638249 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 22:49:32.661926 1186733 provision.go:87] duration metric: took 389.236271ms to configureAuth
	I0803 22:49:32.661953 1186733 ubuntu.go:193] setting minikube options for container-runtime
	I0803 22:49:32.662139 1186733 config.go:182] Loaded profile config "addons-369401": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 22:49:32.662147 1186733 machine.go:97] duration metric: took 894.267859ms to provisionDockerMachine
	I0803 22:49:32.662153 1186733 client.go:171] duration metric: took 9.249904524s to LocalClient.Create
	I0803 22:49:32.662175 1186733 start.go:167] duration metric: took 9.249974875s to libmachine.API.Create "addons-369401"
	I0803 22:49:32.662183 1186733 start.go:293] postStartSetup for "addons-369401" (driver="docker")
	I0803 22:49:32.662192 1186733 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 22:49:32.662247 1186733 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 22:49:32.662289 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.678162 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:49:32.773946 1186733 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 22:49:32.777008 1186733 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0803 22:49:32.777054 1186733 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0803 22:49:32.777067 1186733 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0803 22:49:32.777074 1186733 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0803 22:49:32.777085 1186733 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/addons for local assets ...
	I0803 22:49:32.777151 1186733 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/files for local assets ...
	I0803 22:49:32.777172 1186733 start.go:296] duration metric: took 114.984336ms for postStartSetup
	I0803 22:49:32.777480 1186733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-369401
	I0803 22:49:32.793688 1186733 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/config.json ...
	I0803 22:49:32.793985 1186733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 22:49:32.794074 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.810354 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:49:32.901745 1186733 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0803 22:49:32.906468 1186733 start.go:128] duration metric: took 9.49743029s to createHost
	I0803 22:49:32.906496 1186733 start.go:83] releasing machines lock for "addons-369401", held for 9.497580864s
	I0803 22:49:32.906577 1186733 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-369401
	I0803 22:49:32.923445 1186733 ssh_runner.go:195] Run: cat /version.json
	I0803 22:49:32.923469 1186733 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 22:49:32.923501 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.923536 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:49:32.949919 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:49:32.962334 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:49:33.044994 1186733 ssh_runner.go:195] Run: systemctl --version
	I0803 22:49:33.178267 1186733 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0803 22:49:33.182885 1186733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0803 22:49:33.208459 1186733 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0803 22:49:33.208588 1186733 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 22:49:33.237742 1186733 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
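The loopback config is patched in place (a "name" field is added and cniVersion is pinned to 1.0.0), while any bridge or podman configs are parked under a .mk_disabled suffix so the runtime's CNI plugin ignores them but the change stays reversible. The rename step, stripped of the find machinery, is roughly:

    # Sketch: disable bridge/podman CNI configs by renaming them (reversible)
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] && [ "${f%.mk_disabled}" = "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done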
	I0803 22:49:33.237817 1186733 start.go:495] detecting cgroup driver to use...
	I0803 22:49:33.237865 1186733 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0803 22:49:33.237948 1186733 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 22:49:33.250625 1186733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 22:49:33.262543 1186733 docker.go:217] disabling cri-docker service (if available) ...
	I0803 22:49:33.262689 1186733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 22:49:33.277057 1186733 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 22:49:33.292279 1186733 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 22:49:33.382230 1186733 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 22:49:33.472420 1186733 docker.go:233] disabling docker service ...
	I0803 22:49:33.472499 1186733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 22:49:33.492632 1186733 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 22:49:33.505358 1186733 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 22:49:33.598483 1186733 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 22:49:33.689240 1186733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 22:49:33.700905 1186733 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 22:49:33.717694 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0803 22:49:33.729515 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 22:49:33.740035 1186733 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 22:49:33.740134 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 22:49:33.750759 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 22:49:33.761188 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 22:49:33.771771 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 22:49:33.782231 1186733 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 22:49:33.791477 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 22:49:33.801562 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 22:49:33.811162 1186733 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 22:49:33.820863 1186733 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 22:49:33.829350 1186733 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 22:49:33.837836 1186733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:49:33.928300 1186733 ssh_runner.go:195] Run: sudo systemctl restart containerd
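The sed batch above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the runc v2 shim for both legacy runtime names, the pause:3.9 sandbox image, unprivileged ports enabled, and the CNI conf_dir pinned to /etc/cni/net.d; only then is containerd restarted. Condensed into one idempotent script (same keys and values as in the log):

    sudo sed -i -r \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd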
	I0803 22:49:34.074747 1186733 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0803 22:49:34.074840 1186733 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0803 22:49:34.078748 1186733 start.go:563] Will wait 60s for crictl version
	I0803 22:49:34.078818 1186733 ssh_runner.go:195] Run: which crictl
	I0803 22:49:34.082542 1186733 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 22:49:34.126961 1186733 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0803 22:49:34.127114 1186733 ssh_runner.go:195] Run: containerd --version
	I0803 22:49:34.150074 1186733 ssh_runner.go:195] Run: containerd --version
	I0803 22:49:34.174735 1186733 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0803 22:49:34.176878 1186733 cli_runner.go:164] Run: docker network inspect addons-369401 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0803 22:49:34.192811 1186733 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0803 22:49:34.196408 1186733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
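This is minikube's idempotent /etc/hosts rewrite: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result lands in a temp file that a single sudo cp moves into place (a plain `sudo echo ... > /etc/hosts` would redirect as the unprivileged user and fail). Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.49.1\thost.minikube.internal'
    } > /tmp/h.$$                  # assemble the new file as the normal user
    sudo cp /tmp/h.$$ /etc/hosts   # one privileged copy back

The same idiom reappears below for control-plane.minikube.internal.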
	I0803 22:49:34.207323 1186733 kubeadm.go:883] updating cluster {Name:addons-369401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-369401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 22:49:34.207452 1186733 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0803 22:49:34.207516 1186733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:49:34.244788 1186733 containerd.go:627] all images are preloaded for containerd runtime.
	I0803 22:49:34.244814 1186733 containerd.go:534] Images already preloaded, skipping extraction
	I0803 22:49:34.244877 1186733 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 22:49:34.280117 1186733 containerd.go:627] all images are preloaded for containerd runtime.
	I0803 22:49:34.280140 1186733 cache_images.go:84] Images are preloaded, skipping loading
	I0803 22:49:34.280150 1186733 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 containerd true true} ...
	I0803 22:49:34.280251 1186733 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-369401 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-369401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 22:49:34.280321 1186733 ssh_runner.go:195] Run: sudo crictl info
	I0803 22:49:34.318448 1186733 cni.go:84] Creating CNI manager for ""
	I0803 22:49:34.318475 1186733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:49:34.318486 1186733 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 22:49:34.318511 1186733 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-369401 NodeName:addons-369401 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 22:49:34.318658 1186733 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-369401"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
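The manifest above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file, separated by `---`; kubeadm splits the stream and applies each document. As a hedged aside (not something this run does), on kubeadm v1.26+ the file can be sanity-checked before init:

    # optional pre-flight check, not part of this log
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml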
	
	I0803 22:49:34.318732 1186733 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0803 22:49:34.327613 1186733 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 22:49:34.327690 1186733 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 22:49:34.336241 1186733 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0803 22:49:34.354769 1186733 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 22:49:34.373092 1186733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0803 22:49:34.391339 1186733 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0803 22:49:34.394962 1186733 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 22:49:34.405421 1186733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:49:34.492928 1186733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:49:34.508357 1186733 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401 for IP: 192.168.49.2
	I0803 22:49:34.508382 1186733 certs.go:194] generating shared ca certs ...
	I0803 22:49:34.508399 1186733 certs.go:226] acquiring lock for ca certs: {Name:mk245d61d460943c9f9c4518cc1e3561b25bafd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:34.508619 1186733 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key
	I0803 22:49:34.874122 1186733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt ...
	I0803 22:49:34.874155 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt: {Name:mka866e14a85e56a5f11b5904ef01a053d85b572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:34.874379 1186733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key ...
	I0803 22:49:34.874394 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key: {Name:mk7481c7cf66d2f5ae3e1886108ece95028a1cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:34.874498 1186733 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key
	I0803 22:49:35.707727 1186733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.crt ...
	I0803 22:49:35.707765 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.crt: {Name:mk392210570656df61447c2fea9b41886cc30f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:35.707962 1186733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key ...
	I0803 22:49:35.707976 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key: {Name:mkd4f09286ff85a8788a2ee603ca0b33928b0373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:35.708068 1186733 certs.go:256] generating profile certs ...
	I0803 22:49:35.708131 1186733 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.key
	I0803 22:49:35.708150 1186733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt with IP's: []
	I0803 22:49:36.174339 1186733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt ...
	I0803 22:49:36.174371 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: {Name:mk84c40dd16d37bc36e0c7dcba967e1a7428d298 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:36.175075 1186733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.key ...
	I0803 22:49:36.175093 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.key: {Name:mk9509294a5cd78d2d6b52ee0abe1e4ac06d4e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:36.175193 1186733 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key.cbe8a61c
	I0803 22:49:36.175214 1186733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt.cbe8a61c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0803 22:49:36.572665 1186733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt.cbe8a61c ...
	I0803 22:49:36.572697 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt.cbe8a61c: {Name:mk192bb1ef462429b4e7eb2bf233c8c73479894c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:36.573584 1186733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key.cbe8a61c ...
	I0803 22:49:36.573607 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key.cbe8a61c: {Name:mk67ddbade3e80cacc3f89bcfb336c2fa17a1530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:36.574199 1186733 certs.go:381] copying /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt.cbe8a61c -> /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt
	I0803 22:49:36.574299 1186733 certs.go:385] copying /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key.cbe8a61c -> /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key
	I0803 22:49:36.574360 1186733 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.key
	I0803 22:49:36.574386 1186733 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.crt with IP's: []
	I0803 22:49:37.181629 1186733 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.crt ...
	I0803 22:49:37.181663 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.crt: {Name:mkbf6806b7ad42dc44da93386c86a6cf77e112b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:37.182323 1186733 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.key ...
	I0803 22:49:37.182344 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.key: {Name:mk50101d545196d22ee6770943448e8281587d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:37.182557 1186733 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 22:49:37.182604 1186733 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem (1078 bytes)
	I0803 22:49:37.182628 1186733 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem (1123 bytes)
	I0803 22:49:37.182656 1186733 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem (1675 bytes)
	I0803 22:49:37.183467 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 22:49:37.210386 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 22:49:37.235698 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 22:49:37.261424 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 22:49:37.287298 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0803 22:49:37.313968 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 22:49:37.338418 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 22:49:37.362877 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 22:49:37.386843 1186733 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 22:49:37.411003 1186733 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 22:49:37.428398 1186733 ssh_runner.go:195] Run: openssl version
	I0803 22:49:37.433891 1186733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 22:49:37.443089 1186733 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:49:37.446480 1186733 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:49:37.446543 1186733 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 22:49:37.453048 1186733 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
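The b5213941.0 symlink name is OpenSSL's subject-hash convention: tools resolve CAs in /etc/ssl/certs by <subject-hash>.0, so the link name must match the hash printed by the previous openssl command. The two steps fit together like this:

    # the hash command above printed b5213941 for minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"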
	I0803 22:49:37.462266 1186733 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 22:49:37.465529 1186733 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0803 22:49:37.465621 1186733 kubeadm.go:392] StartCluster: {Name:addons-369401 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-369401 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:49:37.465710 1186733 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0803 22:49:37.465767 1186733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 22:49:37.502583 1186733 cri.go:89] found id: ""
	I0803 22:49:37.502701 1186733 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 22:49:37.511639 1186733 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0803 22:49:37.520440 1186733 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0803 22:49:37.520542 1186733 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0803 22:49:37.533211 1186733 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0803 22:49:37.533231 1186733 kubeadm.go:157] found existing configuration files:
	
	I0803 22:49:37.533294 1186733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0803 22:49:37.543598 1186733 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0803 22:49:37.543715 1186733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0803 22:49:37.553004 1186733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0803 22:49:37.563294 1186733 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0803 22:49:37.563444 1186733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0803 22:49:37.572476 1186733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0803 22:49:37.582732 1186733 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0803 22:49:37.582847 1186733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0803 22:49:37.591910 1186733 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0803 22:49:37.605186 1186733 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0803 22:49:37.605281 1186733 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
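Each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint; anything else is deleted so kubeadm regenerates it from scratch. The four grep-then-rm pairs above are equivalent to:

    ep='https://control-plane.minikube.internal:8443'
    for f in admin kubelet controller-manager scheduler; do
      # keep the file only if it references the expected endpoint
      sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done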
	I0803 22:49:37.613849 1186733 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0803 22:49:37.657702 1186733 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0803 22:49:37.657955 1186733 kubeadm.go:310] [preflight] Running pre-flight checks
	I0803 22:49:37.696765 1186733 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0803 22:49:37.696867 1186733 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1066-aws
	I0803 22:49:37.696926 1186733 kubeadm.go:310] OS: Linux
	I0803 22:49:37.696994 1186733 kubeadm.go:310] CGROUPS_CPU: enabled
	I0803 22:49:37.697072 1186733 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0803 22:49:37.697147 1186733 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0803 22:49:37.697222 1186733 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0803 22:49:37.697286 1186733 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0803 22:49:37.697351 1186733 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0803 22:49:37.697455 1186733 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0803 22:49:37.697551 1186733 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0803 22:49:37.697626 1186733 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0803 22:49:37.767476 1186733 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0803 22:49:37.767617 1186733 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0803 22:49:37.767737 1186733 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0803 22:49:37.992768 1186733 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0803 22:49:37.995707 1186733 out.go:204]   - Generating certificates and keys ...
	I0803 22:49:37.995818 1186733 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0803 22:49:37.995909 1186733 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0803 22:49:38.967848 1186733 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0803 22:49:39.143831 1186733 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0803 22:49:39.761308 1186733 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0803 22:49:40.159194 1186733 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0803 22:49:40.842919 1186733 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0803 22:49:40.843204 1186733 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-369401 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0803 22:49:41.906828 1186733 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0803 22:49:41.906967 1186733 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-369401 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0803 22:49:42.289940 1186733 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0803 22:49:42.686528 1186733 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0803 22:49:43.268046 1186733 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0803 22:49:43.268296 1186733 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0803 22:49:43.605010 1186733 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0803 22:49:43.921300 1186733 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0803 22:49:44.352107 1186733 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0803 22:49:44.518709 1186733 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0803 22:49:44.747280 1186733 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0803 22:49:44.748043 1186733 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0803 22:49:44.751115 1186733 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0803 22:49:44.754015 1186733 out.go:204]   - Booting up control plane ...
	I0803 22:49:44.754114 1186733 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0803 22:49:44.754191 1186733 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0803 22:49:44.754571 1186733 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0803 22:49:44.765346 1186733 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0803 22:49:44.766392 1186733 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0803 22:49:44.766651 1186733 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0803 22:49:44.872330 1186733 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0803 22:49:44.872431 1186733 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0803 22:49:45.873152 1186733 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000921045s
	I0803 22:49:45.873237 1186733 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0803 22:49:51.874704 1186733 kubeadm.go:310] [api-check] The API server is healthy after 6.00135645s
	I0803 22:49:51.893747 1186733 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0803 22:49:51.907290 1186733 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0803 22:49:51.928125 1186733 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0803 22:49:51.928494 1186733 kubeadm.go:310] [mark-control-plane] Marking the node addons-369401 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0803 22:49:51.939335 1186733 kubeadm.go:310] [bootstrap-token] Using token: dcg8a2.hueiv52o1vkepxy1
	I0803 22:49:51.941172 1186733 out.go:204]   - Configuring RBAC rules ...
	I0803 22:49:51.941294 1186733 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0803 22:49:51.946562 1186733 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0803 22:49:51.956160 1186733 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0803 22:49:51.960982 1186733 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0803 22:49:51.964803 1186733 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0803 22:49:51.971366 1186733 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0803 22:49:52.283047 1186733 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0803 22:49:52.735369 1186733 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0803 22:49:53.283145 1186733 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0803 22:49:53.284184 1186733 kubeadm.go:310] 
	I0803 22:49:53.284263 1186733 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0803 22:49:53.284269 1186733 kubeadm.go:310] 
	I0803 22:49:53.284343 1186733 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0803 22:49:53.284354 1186733 kubeadm.go:310] 
	I0803 22:49:53.284379 1186733 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0803 22:49:53.284436 1186733 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0803 22:49:53.284485 1186733 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0803 22:49:53.284489 1186733 kubeadm.go:310] 
	I0803 22:49:53.284540 1186733 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0803 22:49:53.284545 1186733 kubeadm.go:310] 
	I0803 22:49:53.284590 1186733 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0803 22:49:53.284594 1186733 kubeadm.go:310] 
	I0803 22:49:53.284646 1186733 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0803 22:49:53.284718 1186733 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0803 22:49:53.284810 1186733 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0803 22:49:53.284816 1186733 kubeadm.go:310] 
	I0803 22:49:53.284919 1186733 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0803 22:49:53.285020 1186733 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0803 22:49:53.285032 1186733 kubeadm.go:310] 
	I0803 22:49:53.285117 1186733 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token dcg8a2.hueiv52o1vkepxy1 \
	I0803 22:49:53.285238 1186733 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:76002d6e91bd9d7edd16ea59f544a29ae07d2b085d0b347df537b7ba2239aaea \
	I0803 22:49:53.285270 1186733 kubeadm.go:310] 	--control-plane 
	I0803 22:49:53.285286 1186733 kubeadm.go:310] 
	I0803 22:49:53.285382 1186733 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0803 22:49:53.285388 1186733 kubeadm.go:310] 
	I0803 22:49:53.285467 1186733 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token dcg8a2.hueiv52o1vkepxy1 \
	I0803 22:49:53.285571 1186733 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:76002d6e91bd9d7edd16ea59f544a29ae07d2b085d0b347df537b7ba2239aaea 
	I0803 22:49:53.289361 1186733 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1066-aws\n", err: exit status 1
	I0803 22:49:53.289494 1186733 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0803 22:49:53.289516 1186733 cni.go:84] Creating CNI manager for ""
	I0803 22:49:53.289524 1186733 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:49:53.291670 1186733 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0803 22:49:53.294201 1186733 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0803 22:49:53.297892 1186733 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0803 22:49:53.297913 1186733 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0803 22:49:53.315193 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0803 22:49:53.571158 1186733 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0803 22:49:53.571291 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-369401 minikube.k8s.io/updated_at=2024_08_03T22_49_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082 minikube.k8s.io/name=addons-369401 minikube.k8s.io/primary=true
	I0803 22:49:53.571292 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:53.776241 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:53.776302 1186733 ops.go:34] apiserver oom_adj: -16
	I0803 22:49:54.276580 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:54.776979 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:55.277351 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:55.776913 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:56.277264 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:56.777086 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:57.276387 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:57.777126 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:58.276404 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:58.776326 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:59.276610 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:49:59.777171 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:00.277292 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:00.777267 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:01.277364 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:01.776378 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:02.277298 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:02.777312 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:03.276354 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:03.776876 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:04.277265 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:04.777251 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:05.276921 1186733 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0803 22:50:05.400574 1186733 kubeadm.go:1113] duration metric: took 11.829349994s to wait for elevateKubeSystemPrivileges
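The burst of identical `kubectl get sa default` calls above is a readiness poll: the default ServiceAccount only exists once the controller-manager's service-account controller has run, so minikube retries on a ~500ms cadence (visible in the timestamps) until the call succeeds. As a plain loop:

    # wait until the default ServiceAccount is created
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done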
	I0803 22:50:05.400600 1186733 kubeadm.go:394] duration metric: took 27.93498401s to StartCluster
	I0803 22:50:05.400618 1186733 settings.go:142] acquiring lock: {Name:mk6781ca2b0427afb2b67408884ede06d33d8dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:05.401429 1186733 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 22:50:05.401823 1186733 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/kubeconfig: {Name:mk7ac442c13ee76103bb330a149278eea8a7c99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:50:05.402024 1186733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0803 22:50:05.402050 1186733 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0803 22:50:05.402299 1186733 config.go:182] Loaded profile config "addons-369401": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 22:50:05.402342 1186733 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0803 22:50:05.402421 1186733 addons.go:69] Setting yakd=true in profile "addons-369401"
	I0803 22:50:05.402448 1186733 addons.go:234] Setting addon yakd=true in "addons-369401"
	I0803 22:50:05.402473 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.402936 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.403244 1186733 addons.go:69] Setting inspektor-gadget=true in profile "addons-369401"
	I0803 22:50:05.403271 1186733 addons.go:234] Setting addon inspektor-gadget=true in "addons-369401"
	I0803 22:50:05.403299 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.403726 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.404200 1186733 addons.go:69] Setting cloud-spanner=true in profile "addons-369401"
	I0803 22:50:05.404235 1186733 addons.go:234] Setting addon cloud-spanner=true in "addons-369401"
	I0803 22:50:05.404263 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.404673 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406018 1186733 addons.go:69] Setting metrics-server=true in profile "addons-369401"
	I0803 22:50:05.406290 1186733 addons.go:234] Setting addon metrics-server=true in "addons-369401"
	I0803 22:50:05.406332 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.406744 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406181 1186733 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-369401"
	I0803 22:50:05.410746 1186733 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-369401"
	I0803 22:50:05.410834 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.411449 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.414926 1186733 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-369401"
	I0803 22:50:05.415000 1186733 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-369401"
	I0803 22:50:05.415034 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.415557 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.421295 1186733 addons.go:69] Setting default-storageclass=true in profile "addons-369401"
	I0803 22:50:05.421359 1186733 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-369401"
	I0803 22:50:05.421820 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406193 1186733 addons.go:69] Setting registry=true in profile "addons-369401"
	I0803 22:50:05.432411 1186733 addons.go:234] Setting addon registry=true in "addons-369401"
	I0803 22:50:05.432483 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.432990 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406198 1186733 addons.go:69] Setting storage-provisioner=true in profile "addons-369401"
	I0803 22:50:05.436807 1186733 addons.go:234] Setting addon storage-provisioner=true in "addons-369401"
	I0803 22:50:05.436855 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.437335 1186733 addons.go:69] Setting gcp-auth=true in profile "addons-369401"
	I0803 22:50:05.437369 1186733 mustload.go:65] Loading cluster: addons-369401
	I0803 22:50:05.437515 1186733 config.go:182] Loaded profile config "addons-369401": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 22:50:05.437724 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.438160 1186733 addons.go:69] Setting ingress=true in profile "addons-369401"
	I0803 22:50:05.438188 1186733 addons.go:234] Setting addon ingress=true in "addons-369401"
	I0803 22:50:05.438229 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.438609 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406202 1186733 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-369401"
	I0803 22:50:05.406206 1186733 addons.go:69] Setting volcano=true in profile "addons-369401"
	I0803 22:50:05.439701 1186733 addons.go:234] Setting addon volcano=true in "addons-369401"
	I0803 22:50:05.406210 1186733 addons.go:69] Setting volumesnapshots=true in profile "addons-369401"
	I0803 22:50:05.451960 1186733 addons.go:234] Setting addon volumesnapshots=true in "addons-369401"
	I0803 22:50:05.451981 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.452405 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.406267 1186733 out.go:177] * Verifying Kubernetes components...
	I0803 22:50:05.471543 1186733 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 22:50:05.451798 1186733 addons.go:69] Setting ingress-dns=true in profile "addons-369401"
	I0803 22:50:05.471961 1186733 addons.go:234] Setting addon ingress-dns=true in "addons-369401"
	I0803 22:50:05.472018 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.472479 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.475196 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.451892 1186733 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-369401"
	I0803 22:50:05.490756 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.451935 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.505271 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.525964 1186733 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0803 22:50:05.560546 1186733 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0803 22:50:05.561780 1186733 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0803 22:50:05.563611 1186733 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 22:50:05.563697 1186733 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 22:50:05.563825 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.593492 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0803 22:50:05.593524 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0803 22:50:05.593593 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
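Note: the repeated docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls above use a Go template to look up the host port Docker mapped to the container's SSH port 22; that port (34253 in this run, as the sshutil lines below show) is what each new SSH client dials on 127.0.0.1. Run by hand it would look roughly like:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-369401
	34253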
	I0803 22:50:05.593998 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.598096 1186733 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0803 22:50:05.598115 1186733 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0803 22:50:05.598178 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.608204 1186733 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0803 22:50:05.608425 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0803 22:50:05.615951 1186733 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0803 22:50:05.617009 1186733 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:05.617031 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0803 22:50:05.617148 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.661585 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0803 22:50:05.675048 1186733 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:05.675078 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0803 22:50:05.675147 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.706753 1186733 addons.go:234] Setting addon default-storageclass=true in "addons-369401"
	I0803 22:50:05.706811 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.707214 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.726614 1186733 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:05.728556 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.731630 1186733 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 22:50:05.733921 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0803 22:50:05.734870 1186733 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:05.734899 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 22:50:05.734968 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.736066 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0803 22:50:05.736103 1186733 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0803 22:50:05.740291 1186733 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0803 22:50:05.741677 1186733 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0803 22:50:05.741799 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0803 22:50:05.741809 1186733 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0803 22:50:05.741880 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.745229 1186733 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:05.745379 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0803 22:50:05.745826 1186733 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:05.745847 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0803 22:50:05.745918 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.760506 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0803 22:50:05.762181 1186733 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0803 22:50:05.764763 1186733 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0803 22:50:05.764876 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0803 22:50:05.769454 1186733 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0803 22:50:05.769481 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0803 22:50:05.769553 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.778985 1186733 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0803 22:50:05.779190 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.781455 1186733 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:05.781476 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0803 22:50:05.781541 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.796954 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0803 22:50:05.800506 1186733 out.go:177]   - Using image docker.io/registry:2.8.3
	I0803 22:50:05.806113 1186733 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0803 22:50:05.808057 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0803 22:50:05.808087 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0803 22:50:05.808156 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.808523 1186733 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0803 22:50:05.815024 1186733 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0803 22:50:05.815048 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0803 22:50:05.815113 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.837244 1186733 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 22:50:05.858164 1186733 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-369401"
	I0803 22:50:05.858261 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:05.861754 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:05.918361 1186733 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:05.918383 1186733 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 22:50:05.918522 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:05.918918 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.923682 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.934290 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.934750 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.947907 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.993151 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:05.995555 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:06.011147 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:06.025041 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:06.026699 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	W0803 22:50:06.027979 1186733 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0803 22:50:06.028009 1186733 retry.go:31] will retry after 295.93065ms: ssh: handshake failed: EOF
	I0803 22:50:06.029304 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:06.041391 1186733 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0803 22:50:06.043268 1186733 out.go:177]   - Using image docker.io/busybox:stable
	I0803 22:50:06.045046 1186733 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:06.045069 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0803 22:50:06.045136 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:06.066605 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:06.471683 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0803 22:50:06.471753 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0803 22:50:06.481553 1186733 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 22:50:06.481618 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0803 22:50:06.506707 1186733 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 22:50:06.506730 1186733 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 22:50:06.545109 1186733 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:06.545177 1186733 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 22:50:06.593563 1186733 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0803 22:50:06.593638 1186733 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0803 22:50:06.624964 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0803 22:50:06.625034 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0803 22:50:06.628595 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0803 22:50:06.684392 1186733 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0803 22:50:06.684463 1186733 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0803 22:50:06.701505 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 22:50:06.741986 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 22:50:06.747297 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0803 22:50:06.754349 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 22:50:06.768519 1186733 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0803 22:50:06.768597 1186733 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0803 22:50:06.771564 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0803 22:50:06.809545 1186733 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0803 22:50:06.809619 1186733 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0803 22:50:06.817924 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0803 22:50:06.818002 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0803 22:50:06.822980 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0803 22:50:06.823051 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0803 22:50:06.836942 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0803 22:50:06.869581 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0803 22:50:06.891617 1186733 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0803 22:50:06.891695 1186733 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0803 22:50:06.914972 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0803 22:50:06.933187 1186733 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0803 22:50:06.933259 1186733 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0803 22:50:06.953747 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0803 22:50:06.953817 1186733 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0803 22:50:06.991952 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0803 22:50:06.992021 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0803 22:50:07.082946 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0803 22:50:07.083020 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0803 22:50:07.124395 1186733 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:07.124466 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0803 22:50:07.262936 1186733 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0803 22:50:07.263013 1186733 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0803 22:50:07.325647 1186733 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:07.325703 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0803 22:50:07.394797 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0803 22:50:07.394874 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0803 22:50:07.402280 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0803 22:50:07.402354 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0803 22:50:07.494357 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0803 22:50:07.501141 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:07.659095 1186733 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:07.659165 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0803 22:50:07.676405 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0803 22:50:07.676483 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0803 22:50:07.952542 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0803 22:50:07.952616 1186733 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0803 22:50:07.982435 1186733 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.145158563s)
	I0803 22:50:07.982742 1186733 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.203717747s)
	I0803 22:50:07.982784 1186733 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
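Note: the sed pipeline that just completed (the configmap edit launched at 22:50:05.779 above) splices a hosts block and a log directive into the CoreDNS Corefile, so the relevant portion of the ConfigMap ends up roughly like this (other default plugins elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

That is what makes host.minikube.internal resolvable from inside pods, pointing back at the host gateway 192.168.49.1.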
	I0803 22:50:07.984253 1186733 node_ready.go:35] waiting up to 6m0s for node "addons-369401" to be "Ready" ...
	I0803 22:50:07.989273 1186733 node_ready.go:49] node "addons-369401" has status "Ready":"True"
	I0803 22:50:07.989297 1186733 node_ready.go:38] duration metric: took 4.990524ms for node "addons-369401" to be "Ready" ...
	I0803 22:50:07.989308 1186733 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 22:50:08.000694 1186733 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace to be "Ready" ...
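Note: these pod_ready.go waits are the programmatic equivalent of polling pod conditions with kubectl; an illustrative hand-run check along the same lines (not part of this run) would be:

	kubectl --context addons-369401 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s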
	I0803 22:50:08.013211 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0803 22:50:08.086893 1186733 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0803 22:50:08.086979 1186733 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0803 22:50:08.418160 1186733 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:08.418221 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0803 22:50:08.431304 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0803 22:50:08.431363 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0803 22:50:08.491278 1186733 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-369401" context rescaled to 1 replicas
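Note: the rescale at 22:50:08.491 trims the coredns deployment down to a single replica for this single-node cluster; the CLI equivalent (illustrative only) would be:

	kubectl --context addons-369401 -n kube-system scale deployment coredns --replicas=1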
	I0803 22:50:08.521909 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.893240262s)
	I0803 22:50:08.581476 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0803 22:50:08.581623 1186733 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0803 22:50:08.660354 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0803 22:50:08.738435 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0803 22:50:08.738505 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0803 22:50:09.297842 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0803 22:50:09.297910 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0803 22:50:09.986061 1186733 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:09.986135 1186733 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0803 22:50:10.009391 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:10.384675 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0803 22:50:10.781208 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.079671524s)
	I0803 22:50:11.783759 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.041688415s)
	I0803 22:50:11.783805 1186733 addons.go:475] Verifying addon metrics-server=true in "addons-369401"
	I0803 22:50:11.783859 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.036493902s)
	I0803 22:50:11.784029 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.029621568s)
	I0803 22:50:11.784132 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.012506907s)
	W0803 22:50:11.804775 1186733 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
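Note: the default-storageclass failure above is a standard optimistic-concurrency conflict: the local-path StorageClass was modified (the rancher provisioner is being installed in parallel) between minikube's read and its update, so the update carried a stale resourceVersion and was rejected. A minimal client-go sketch of the usual remedy, re-reading and re-applying the change on each conflict (illustrative only, not minikube's actual code; markNonDefault is a made-up name):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// retrying whenever the apiserver rejects a stale resourceVersion.
	func markNonDefault(ctx context.Context, client kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read so every attempt carries a fresh resourceVersion.
			sc, err := client.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}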
	I0803 22:50:12.011483 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:12.871126 1186733 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0803 22:50:12.871215 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:12.893827 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:13.539797 1186733 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0803 22:50:13.708577 1186733 addons.go:234] Setting addon gcp-auth=true in "addons-369401"
	I0803 22:50:13.708703 1186733 host.go:66] Checking if "addons-369401" exists ...
	I0803 22:50:13.709438 1186733 cli_runner.go:164] Run: docker container inspect addons-369401 --format={{.State.Status}}
	I0803 22:50:13.745116 1186733 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0803 22:50:13.745176 1186733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-369401
	I0803 22:50:13.780640 1186733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34253 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/addons-369401/id_rsa Username:docker}
	I0803 22:50:14.013334 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:16.015021 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:16.266527 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.429497969s)
	I0803 22:50:16.266738 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.397086655s)
	I0803 22:50:16.266778 1186733 addons.go:475] Verifying addon ingress=true in "addons-369401"
	I0803 22:50:16.267026 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.351983177s)
	I0803 22:50:16.267221 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.772792566s)
	I0803 22:50:16.267270 1186733 addons.go:475] Verifying addon registry=true in "addons-369401"
	I0803 22:50:16.267424 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.76620757s)
	W0803 22:50:16.268082 1186733 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0803 22:50:16.268102 1186733 retry.go:31] will retry after 371.698653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
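Note: the failure above is an ordering race, not a broken manifest: one kubectl apply both creates the VolumeSnapshotClass CRDs and instantiates a VolumeSnapshotClass, and the new CRDs are not yet established when the custom resource is validated, hence "no matches for kind". The scheduled retry (re-applied below with --force, completing at 22:50:18) succeeds once the CRDs have registered. Outside a retry loop, the usual way to sequence this by hand (illustrative) is:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml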
	I0803 22:50:16.267459 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.254175332s)
	I0803 22:50:16.267512 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.607085605s)
	I0803 22:50:16.280293 1186733 out.go:177] * Verifying ingress addon...
	I0803 22:50:16.282676 1186733 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-369401 service yakd-dashboard -n yakd-dashboard
	
	I0803 22:50:16.285746 1186733 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0803 22:50:16.282747 1186733 out.go:177] * Verifying registry addon...
	I0803 22:50:16.288658 1186733 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0803 22:50:16.312814 1186733 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0803 22:50:16.312835 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:16.315892 1186733 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0803 22:50:16.315960 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:16.640665 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0803 22:50:16.793622 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:16.797382 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:17.069949 1186733 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.324766491s)
	I0803 22:50:17.070120 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.685401071s)
	I0803 22:50:17.070291 1186733 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-369401"
	I0803 22:50:17.072516 1186733 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0803 22:50:17.072627 1186733 out.go:177] * Verifying csi-hostpath-driver addon...
	I0803 22:50:17.076309 1186733 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0803 22:50:17.077329 1186733 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0803 22:50:17.078659 1186733 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0803 22:50:17.078705 1186733 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0803 22:50:17.112302 1186733 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0803 22:50:17.112372 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:17.183501 1186733 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0803 22:50:17.183572 1186733 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0803 22:50:17.202504 1186733 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:17.202565 1186733 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0803 22:50:17.221659 1186733 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0803 22:50:17.291559 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:17.307202 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:17.583746 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:17.791558 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:17.795500 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:18.091103 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:18.283190 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.642417033s)
	I0803 22:50:18.283323 1186733 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.061595376s)
	I0803 22:50:18.286894 1186733 addons.go:475] Verifying addon gcp-auth=true in "addons-369401"
	I0803 22:50:18.289437 1186733 out.go:177] * Verifying gcp-auth addon...
	I0803 22:50:18.292274 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:18.293299 1186733 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0803 22:50:18.294317 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:18.295599 1186733 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0803 22:50:18.507815 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:18.583760 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:18.790089 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:18.793718 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:19.083872 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:19.291059 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:19.298044 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:19.584156 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:19.791901 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:19.805897 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:20.084398 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:20.295545 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:20.295757 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:20.508136 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:20.585528 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:20.800336 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:20.802206 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:21.127645 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:21.292572 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:21.298999 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:21.585895 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:21.794988 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:21.796297 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:22.083078 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:22.290290 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:22.294408 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:22.582803 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:22.791162 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:22.795348 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:23.010001 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:23.083836 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:23.291443 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:23.295282 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:23.582767 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:23.790803 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:23.794211 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:24.085945 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:24.290055 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:24.293879 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:24.582763 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:24.791022 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:24.794293 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:25.013009 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:25.084339 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:25.292280 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:25.298740 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:25.585913 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:25.792110 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:25.805820 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:26.084789 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:26.292467 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:26.296908 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:26.583262 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:26.791062 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:26.794749 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:27.084024 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:27.290672 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:27.293883 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:27.510065 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:27.583708 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:27.792277 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:27.796149 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:28.083656 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:28.289637 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:28.293432 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:28.583197 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:28.790538 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:28.794512 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:29.083715 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:29.290642 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:29.294343 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:29.583158 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:29.790281 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:29.792760 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:30.040565 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:30.086359 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:30.294180 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:30.294931 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:30.582680 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:30.789486 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:30.797634 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:31.096665 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:31.291926 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:31.297598 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:31.583215 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:31.790660 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:31.796152 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:32.083727 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:32.290247 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:32.293065 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:32.510069 1186733 pod_ready.go:102] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"False"
	I0803 22:50:32.583368 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:32.791738 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:32.799598 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:33.084556 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:33.291382 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:33.298012 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:33.582729 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:33.792811 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:33.793035 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:34.011274 1186733 pod_ready.go:92] pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.011304 1186733 pod_ready.go:81] duration metric: took 26.010384435s for pod "coredns-7db6d8ff4d-bxjfr" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.011317 1186733 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xwss5" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.013783 1186733 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-xwss5" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-xwss5" not found
	I0803 22:50:34.013814 1186733 pod_ready.go:81] duration metric: took 2.486916ms for pod "coredns-7db6d8ff4d-xwss5" in "kube-system" namespace to be "Ready" ...
	E0803 22:50:34.013826 1186733 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-xwss5" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-xwss5" not found
	I0803 22:50:34.013836 1186733 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.020590 1186733 pod_ready.go:92] pod "etcd-addons-369401" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.020614 1186733 pod_ready.go:81] duration metric: took 6.769971ms for pod "etcd-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.020629 1186733 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.026620 1186733 pod_ready.go:92] pod "kube-apiserver-addons-369401" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.026647 1186733 pod_ready.go:81] duration metric: took 6.010393ms for pod "kube-apiserver-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.026660 1186733 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.032321 1186733 pod_ready.go:92] pod "kube-controller-manager-addons-369401" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.032349 1186733 pod_ready.go:81] duration metric: took 5.680898ms for pod "kube-controller-manager-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.032362 1186733 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5n228" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.087677 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:34.205907 1186733 pod_ready.go:92] pod "kube-proxy-5n228" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.205934 1186733 pod_ready.go:81] duration metric: took 173.563747ms for pod "kube-proxy-5n228" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.205948 1186733 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.290081 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:34.293773 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:34.584518 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:34.606084 1186733 pod_ready.go:92] pod "kube-scheduler-addons-369401" in "kube-system" namespace has status "Ready":"True"
	I0803 22:50:34.606117 1186733 pod_ready.go:81] duration metric: took 400.159921ms for pod "kube-scheduler-addons-369401" in "kube-system" namespace to be "Ready" ...
	I0803 22:50:34.606128 1186733 pod_ready.go:38] duration metric: took 26.616808854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
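	(Editor's note: the pod_ready waits above poll each named system pod until its Ready condition reports True. The following is a minimal illustrative sketch of such a per-pod wait using client-go; it is not minikube's actual pod_ready.go code, and the kubeconfig path is a placeholder. The pod name and namespace are taken from the log lines above.)

	// Sketch only: wait for one named pod to become Ready, mirroring the
	// pod_ready.go polling cadence (~500ms) visible in the timestamps above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod's Ready condition is True or the 6m timeout expires.
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-bxjfr", metav1.GetOptions{})
			if err != nil {
				return false, nil // pod not visible yet: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
	}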
	I0803 22:50:34.606144 1186733 api_server.go:52] waiting for apiserver process to appear ...
	I0803 22:50:34.606222 1186733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 22:50:34.635889 1186733 api_server.go:72] duration metric: took 29.233807851s to wait for apiserver process to appear ...
	I0803 22:50:34.635918 1186733 api_server.go:88] waiting for apiserver healthz status ...
	I0803 22:50:34.635952 1186733 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0803 22:50:34.645214 1186733 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0803 22:50:34.646720 1186733 api_server.go:141] control plane version: v1.30.3
	I0803 22:50:34.646750 1186733 api_server.go:131] duration metric: took 10.821813ms to wait for apiserver health ...
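	(Editor's note: the healthz probe logged above is a plain HTTPS GET against the apiserver; a 200 response with body "ok" is treated as healthy. A minimal sketch of such a check follows. The endpoint URL is taken from the log; skipping certificate verification is a simplification for the sketch only, where a real client would trust the cluster CA.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a production client would load the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}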
	I0803 22:50:34.646759 1186733 system_pods.go:43] waiting for kube-system pods to appear ...
	I0803 22:50:34.789772 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:34.793573 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:34.812493 1186733 system_pods.go:59] 18 kube-system pods found
	I0803 22:50:34.812529 1186733 system_pods.go:61] "coredns-7db6d8ff4d-bxjfr" [3f8a3e5c-4cc4-4993-bc38-ecb9fe71008e] Running
	I0803 22:50:34.812540 1186733 system_pods.go:61] "csi-hostpath-attacher-0" [6f7cd3bb-1f79-44c2-abc4-2895ce71de1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:50:34.812548 1186733 system_pods.go:61] "csi-hostpath-resizer-0" [931a8f6b-6ebd-4a05-a060-8fd165150f3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:50:34.812556 1186733 system_pods.go:61] "csi-hostpathplugin-zmdnt" [021f3ee1-5d9c-4609-aa5d-77644a18ec94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:50:34.812560 1186733 system_pods.go:61] "etcd-addons-369401" [67a83b52-d742-4980-bd48-00d6746a5c67] Running
	I0803 22:50:34.812565 1186733 system_pods.go:61] "kindnet-5nntp" [a0113cb8-6205-4e14-a9db-0dbf07c4c8cb] Running
	I0803 22:50:34.812570 1186733 system_pods.go:61] "kube-apiserver-addons-369401" [df3deefc-24ef-4209-a3b9-4f2016cc479b] Running
	I0803 22:50:34.812575 1186733 system_pods.go:61] "kube-controller-manager-addons-369401" [430cc6ab-85a1-4306-9fa3-f01c8b330e87] Running
	I0803 22:50:34.812587 1186733 system_pods.go:61] "kube-ingress-dns-minikube" [a3fd8212-a3a1-4bc6-872f-6cc1574fa79f] Running
	I0803 22:50:34.812591 1186733 system_pods.go:61] "kube-proxy-5n228" [59b3601b-8cb0-4dc5-9a17-fbeb06f93da1] Running
	I0803 22:50:34.812601 1186733 system_pods.go:61] "kube-scheduler-addons-369401" [14255b4a-2e50-4565-b422-2c05eaf22e3d] Running
	I0803 22:50:34.812607 1186733 system_pods.go:61] "metrics-server-c59844bb4-6p6m4" [1cbf5238-59c7-45d1-bf04-fa4f91293308] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:50:34.812612 1186733 system_pods.go:61] "nvidia-device-plugin-daemonset-zhsj7" [124c1c15-5fe2-428e-b69b-b3114c15552e] Running
	I0803 22:50:34.812618 1186733 system_pods.go:61] "registry-698f998955-d9572" [a38ebabe-72ac-412d-b25d-2bdc0ed934b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:50:34.812629 1186733 system_pods.go:61] "registry-proxy-tc486" [2188b0ce-6a1e-4e35-adaa-d8123d8321cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:50:34.812637 1186733 system_pods.go:61] "snapshot-controller-745499f584-xhrjl" [83684ab3-d54f-4d91-9ef8-813256921864] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:50:34.812654 1186733 system_pods.go:61] "snapshot-controller-745499f584-xl2t9" [6c1c04eb-4239-41c6-a6de-57b5f1f5f369] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:50:34.812659 1186733 system_pods.go:61] "storage-provisioner" [92f1d7ff-34ea-4356-a94f-3f6695c2bcb8] Running
	I0803 22:50:34.812665 1186733 system_pods.go:74] duration metric: took 165.900789ms to wait for pod list to return data ...
	I0803 22:50:34.812674 1186733 default_sa.go:34] waiting for default service account to be created ...
	I0803 22:50:35.014934 1186733 default_sa.go:45] found service account: "default"
	I0803 22:50:35.014961 1186733 default_sa.go:55] duration metric: took 202.275257ms for default service account to be created ...
	I0803 22:50:35.014974 1186733 system_pods.go:116] waiting for k8s-apps to be running ...
	I0803 22:50:35.083475 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:35.214016 1186733 system_pods.go:86] 18 kube-system pods found
	I0803 22:50:35.214051 1186733 system_pods.go:89] "coredns-7db6d8ff4d-bxjfr" [3f8a3e5c-4cc4-4993-bc38-ecb9fe71008e] Running
	I0803 22:50:35.214062 1186733 system_pods.go:89] "csi-hostpath-attacher-0" [6f7cd3bb-1f79-44c2-abc4-2895ce71de1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0803 22:50:35.214070 1186733 system_pods.go:89] "csi-hostpath-resizer-0" [931a8f6b-6ebd-4a05-a060-8fd165150f3f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0803 22:50:35.214078 1186733 system_pods.go:89] "csi-hostpathplugin-zmdnt" [021f3ee1-5d9c-4609-aa5d-77644a18ec94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0803 22:50:35.214083 1186733 system_pods.go:89] "etcd-addons-369401" [67a83b52-d742-4980-bd48-00d6746a5c67] Running
	I0803 22:50:35.214088 1186733 system_pods.go:89] "kindnet-5nntp" [a0113cb8-6205-4e14-a9db-0dbf07c4c8cb] Running
	I0803 22:50:35.214092 1186733 system_pods.go:89] "kube-apiserver-addons-369401" [df3deefc-24ef-4209-a3b9-4f2016cc479b] Running
	I0803 22:50:35.214098 1186733 system_pods.go:89] "kube-controller-manager-addons-369401" [430cc6ab-85a1-4306-9fa3-f01c8b330e87] Running
	I0803 22:50:35.214102 1186733 system_pods.go:89] "kube-ingress-dns-minikube" [a3fd8212-a3a1-4bc6-872f-6cc1574fa79f] Running
	I0803 22:50:35.214107 1186733 system_pods.go:89] "kube-proxy-5n228" [59b3601b-8cb0-4dc5-9a17-fbeb06f93da1] Running
	I0803 22:50:35.214112 1186733 system_pods.go:89] "kube-scheduler-addons-369401" [14255b4a-2e50-4565-b422-2c05eaf22e3d] Running
	I0803 22:50:35.214118 1186733 system_pods.go:89] "metrics-server-c59844bb4-6p6m4" [1cbf5238-59c7-45d1-bf04-fa4f91293308] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0803 22:50:35.214122 1186733 system_pods.go:89] "nvidia-device-plugin-daemonset-zhsj7" [124c1c15-5fe2-428e-b69b-b3114c15552e] Running
	I0803 22:50:35.214132 1186733 system_pods.go:89] "registry-698f998955-d9572" [a38ebabe-72ac-412d-b25d-2bdc0ed934b1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0803 22:50:35.214139 1186733 system_pods.go:89] "registry-proxy-tc486" [2188b0ce-6a1e-4e35-adaa-d8123d8321cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0803 22:50:35.214147 1186733 system_pods.go:89] "snapshot-controller-745499f584-xhrjl" [83684ab3-d54f-4d91-9ef8-813256921864] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:50:35.214154 1186733 system_pods.go:89] "snapshot-controller-745499f584-xl2t9" [6c1c04eb-4239-41c6-a6de-57b5f1f5f369] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0803 22:50:35.214159 1186733 system_pods.go:89] "storage-provisioner" [92f1d7ff-34ea-4356-a94f-3f6695c2bcb8] Running
	I0803 22:50:35.214166 1186733 system_pods.go:126] duration metric: took 199.186307ms to wait for k8s-apps to be running ...
	I0803 22:50:35.214173 1186733 system_svc.go:44] waiting for kubelet service to be running ....
	I0803 22:50:35.214232 1186733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 22:50:35.234108 1186733 system_svc.go:56] duration metric: took 19.92482ms WaitForService to wait for kubelet
	I0803 22:50:35.234151 1186733 kubeadm.go:582] duration metric: took 29.832074434s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
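	(Editor's note: the kubelet check above shells out to systemd; `systemctl is-active --quiet` exits 0 only when the unit is active, so the exit status alone answers the question. A hedged local sketch of that probe follows; the log's ssh_runner call runs the same command over SSH inside the node instead.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means the unit is active; any non-zero exit means it is not.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}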
	I0803 22:50:35.234173 1186733 node_conditions.go:102] verifying NodePressure condition ...
	I0803 22:50:35.290879 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:35.297250 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:35.405423 1186733 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0803 22:50:35.405455 1186733 node_conditions.go:123] node cpu capacity is 2
	I0803 22:50:35.405475 1186733 node_conditions.go:105] duration metric: took 171.289145ms to run NodePressure ...
	I0803 22:50:35.405488 1186733 start.go:241] waiting for startup goroutines ...
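	(Editor's note: the NodePressure verification above reads node conditions and capacity; this runner reports 203034800Ki ephemeral storage and 2 CPUs, which is why the Volcano test job later fails with "Insufficient cpu". An illustrative sketch of that inspection, with the kubeconfig path again a placeholder:)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Report pressure conditions; all should be False on a healthy node.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					fmt.Printf("node %s condition %s=%s\n", n.Name, c.Type, c.Status)
				}
			}
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s capacity: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}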
	I0803 22:50:35.584566 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:35.791524 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:35.796775 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:36.103110 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:36.292318 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:36.295923 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:36.583238 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:36.790734 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:36.796213 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:37.083981 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:37.291403 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:37.294595 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:37.583356 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:37.790818 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:37.794071 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:38.083915 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:38.290869 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:38.295976 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:38.583998 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:38.789994 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:38.793940 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:39.084175 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:39.292234 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:39.294399 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:39.584657 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:39.792223 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:39.795498 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:40.084544 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:40.298406 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:40.304335 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:40.584108 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:40.791783 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:40.796476 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:41.094755 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:41.304055 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:41.311500 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:41.584184 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:41.792052 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:41.796377 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:42.085820 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:42.297750 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:42.308956 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:42.583481 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:42.790798 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:42.792803 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:43.086458 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:43.293577 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:43.295057 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:43.583361 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:43.790754 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:43.798461 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:44.082730 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:44.291426 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:44.294972 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:44.584302 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:44.791314 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:44.800599 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:45.087354 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:45.291411 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:45.294887 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:45.582773 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:45.791317 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:45.793121 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:46.083478 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:46.290225 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:46.294350 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:46.583753 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:46.796642 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:46.797851 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:47.084166 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:47.289972 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:47.294142 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:47.583003 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:47.790748 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:47.793855 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:48.084029 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:48.291271 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:48.296660 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:48.583370 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:48.791670 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:48.793475 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0803 22:50:49.084541 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:49.291252 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:49.298639 1186733 kapi.go:107] duration metric: took 33.009975861s to wait for kubernetes.io/minikube-addons=registry ...
	I0803 22:50:49.583275 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:49.790799 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:50.084084 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:50.296989 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:50.582584 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:50.791547 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:51.086065 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:51.290980 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:51.584158 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:51.790112 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:52.087534 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:52.290674 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:52.583229 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:52.790786 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:53.106169 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:53.290082 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:53.583026 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:53.790085 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:54.084908 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:54.290576 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:54.589660 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:54.791126 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:55.084057 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:55.292030 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:55.582729 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:55.790581 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:56.082469 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:56.290103 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:56.583494 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:56.789970 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.082903 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.290396 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:57.583987 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:57.790588 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.084553 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.291262 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:58.582823 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:58.790535 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.090097 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.290394 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:50:59.583665 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:50:59.791107 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:00.103297 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.304696 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:00.584053 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:00.790652 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.082491 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.300301 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:01.585439 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:01.790306 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:02.083170 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.291101 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:02.583215 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:02.793142 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.085684 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.290107 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:03.584453 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:03.791086 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.084252 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.291512 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:04.583459 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:04.790715 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:05.093793 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.289971 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:05.583605 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:05.790566 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.084402 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.290332 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:06.582993 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:06.791608 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.083777 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.290855 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:07.583343 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:07.790919 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.083061 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0803 22:51:08.290606 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:08.582545 1186733 kapi.go:107] duration metric: took 51.505211791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0803 22:51:08.789747 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.290326 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:09.789833 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.290951 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:10.790303 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.290026 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:11.790147 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.289890 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:12.790782 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.290476 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:13.789792 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.290885 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:14.790888 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.291317 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:15.790671 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.290388 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:16.790879 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.290240 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:17.789703 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:18.292480 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:18.789985 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.289979 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:19.790006 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:20.294675 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:20.790276 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.290069 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:21.790249 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:22.291320 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:22.790858 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.298938 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:23.791037 1186733 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0803 22:51:24.304348 1186733 kapi.go:107] duration metric: took 1m8.018600085s to wait for app.kubernetes.io/name=ingress-nginx ...
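	(Editor's note: the three addon waits that just completed — registry, csi-hostpath-driver, ingress-nginx — all follow the same kapi.go shape: list pods by label selector, report the aggregate phase, and repeat until every match is Running. A minimal sketch of that pattern follows; it is not kapi.go itself, and the namespace is an assumption. The selector is taken from the log above.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "app.kubernetes.io/name=ingress-nginx" // selector from the log above
		err = wait.PollImmediate(500*time.Millisecond, 10*time.Minute, func() (bool, error) {
			// Namespace is assumed; the selector, not the namespace, appears in the log.
			pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling on transient errors or before pods exist
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err != nil {
			panic(err)
		}
	}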
	I0803 22:51:41.297048 1186733 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0803 22:51:41.297078 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:41.796811 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.297702 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:42.796953 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.296394 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:43.796608 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.297313 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:44.797094 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.298776 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:45.796585 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.297426 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:46.797608 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.297363 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:47.796538 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.297612 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:48.796884 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.296797 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:49.797478 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.296638 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:50.797654 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.296646 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:51.797259 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.297501 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:52.796669 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.297104 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:53.796895 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.297299 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:54.797226 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.296759 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:55.797023 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.296939 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:56.796572 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.297342 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:57.797217 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.296601 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:58.798029 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.296711 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:51:59.797097 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.299325 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:00.796559 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.297677 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:01.796816 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:02.296525 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:02.796609 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.296987 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:03.797242 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.296674 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:04.796961 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:05.296826 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same "waiting for pod" poll repeats roughly every 500ms, still Pending, from 22:52:05 through 22:52:47 ...]
	I0803 22:52:47.296674 1186733 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0803 22:52:47.797588 1186733 kapi.go:107] duration metric: took 2m29.50428441s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0803 22:52:47.799472 1186733 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-369401 cluster.
	I0803 22:52:47.802218 1186733 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0803 22:52:47.804429 1186733 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0803 22:52:47.806467 1186733 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, metrics-server, cloud-spanner, storage-provisioner-rancher, volcano, ingress-dns, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0803 22:52:47.808474 1186733 addons.go:510] duration metric: took 2m42.406125298s for enable addons: enabled=[nvidia-device-plugin storage-provisioner metrics-server cloud-spanner storage-provisioner-rancher volcano ingress-dns inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0803 22:52:47.808526 1186733 start.go:246] waiting for cluster config update ...
	I0803 22:52:47.808549 1186733 start.go:255] writing updated cluster config ...
	I0803 22:52:47.808887 1186733 ssh_runner.go:195] Run: rm -f paused
	I0803 22:52:48.154814 1186733 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0803 22:52:48.156591 1186733 out.go:177] * Done! kubectl is now configured to use "addons-369401" cluster and "default" namespace by default
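The gcp-auth opt-out mentioned in the output above is just a pod label. A minimal sketch of a pod that keeps its credentials unmounted (the pod name and image are placeholders; the "true" value follows the addon's documented convention):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"     # the gcp-auth webhook skips pods carrying this label
	spec:
	  containers:
	  - name: app
	    image: nginx                     # placeholder image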
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	fb7d90dfa5f1d       d1ca868ab82aa       2 minutes ago       Exited              gadget                                   5                   0222b36184830       gadget-fwqrb
	19a5bf7843612       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   e377ff26f6ff4       gcp-auth-5db96cd9b4-fq9cd
	6ed62dec2c65a       8b46b1cd48760       4 minutes ago       Running             admission                                0                   145d6ef065a6e       volcano-admission-5f7844f7bc-dr4dc
	d40f24dacc2dd       24f8f979639f1       4 minutes ago       Running             controller                               0                   60f4844592bf9       ingress-nginx-controller-6d9bd977d4-vxjwc
	bce94a09ad00f       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	2653fa11c2718       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	eda68815a42f9       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	d650dbd1e711c       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	ec64d2f546123       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	666c7680f78c1       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   17267c29f2078       csi-hostpath-attacher-0
	12c1a713b29a6       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   645af18b4e749       volcano-controllers-59cb4746db-v7g8t
	39d4ea310eac1       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   980186719c4cc       volcano-scheduler-844f6db89b-4f6t9
	9f58b92ad3a22       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   b1ac71acbc9b7       csi-hostpath-resizer-0
	aecce5e72bdad       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   0bdec948f92be       csi-hostpathplugin-zmdnt
	2b3d7dea63de3       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   16c10f6b776bf       local-path-provisioner-8d985888d-8s9ds
	f11581b6cc240       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8afb07f2fd4b6       snapshot-controller-745499f584-xl2t9
	9f658d4d9a228       296b5f799fcd8       5 minutes ago       Exited              patch                                    0                   9eb85a45f4e97       ingress-nginx-admission-patch-szpgd
	9638cad4d0f88       6fed88f43b276       5 minutes ago       Running             registry                                 0                   b4f0045176a1f       registry-698f998955-d9572
	5a0c260d6e42f       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   40bb7e5e1afef       ingress-nginx-admission-create-rxqns
	a9a61ed26254a       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   964e19dd80182       snapshot-controller-745499f584-xhrjl
	2515c35e2c7ef       77bdba588b953       5 minutes ago       Running             yakd                                     0                   9762dd0b5ccf8       yakd-dashboard-799879c74f-mr7dg
	365a31efe893f       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   82acc6414f26f       metrics-server-c59844bb4-6p6m4
	8fdd62226fc6a       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   7a63180af3aae       registry-proxy-tc486
	fb3756494ec49       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   f9a1338451830       cloud-spanner-emulator-5455fb9b69-56dr7
	95bcfb988da3a       2437cf7621777       5 minutes ago       Running             coredns                                  0                   4c1fdc9ca61b8       coredns-7db6d8ff4d-bxjfr
	6752176a153bc       e396bbd29d2f6       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   6e809ebf0eb13       nvidia-device-plugin-daemonset-zhsj7
	915e8e3a1856f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   d802b8610ae7c       kube-ingress-dns-minikube
	ecadf7a9b5c50       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   84e4a3f2c8d7b       storage-provisioner
	d0d295b02de16       f42786f8afd22       5 minutes ago       Running             kindnet-cni                              0                   44d89d0466cff       kindnet-5nntp
	a95723a1acb3f       2351f570ed0ea       5 minutes ago       Running             kube-proxy                               0                   f8cc49f6582e3       kube-proxy-5n228
	d6d2973118060       d48f992a22722       6 minutes ago       Running             kube-scheduler                           0                   06730b0b93f92       kube-scheduler-addons-369401
	22103fc3f0bef       61773190d42ff       6 minutes ago       Running             kube-apiserver                           0                   9e1d3a62ed90a       kube-apiserver-addons-369401
	20f5c267d9c7b       014faa467e297       6 minutes ago       Running             etcd                                     0                   98a3a2740d686       etcd-addons-369401
	d1681da3c231d       8e97cdb19e7cc       6 minutes ago       Running             kube-controller-manager                  0                   192b22e5b0756       kube-controller-manager-addons-369401
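The one unhealthy entry in the table above is the gadget container (Exited, 5 attempts). A quick triage, assuming the pod is still present in the gadget namespace shown in the node description further down:

	# logs from the last failed attempt
	kubectl --context addons-369401 -n gadget logs gadget-fwqrb --previous
	# restart count, last state, and events
	kubectl --context addons-369401 -n gadget describe pod gadget-fwqrb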
	
	
	==> containerd <==
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.669053974Z" level=info msg="RemoveContainer for \"57cd36bb64728b2c37d14cc95180ffba8c9ec8b4e72f77df518e68126f5fafa5\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.671652836Z" level=info msg="StopPodSandbox for \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.682433358Z" level=info msg="TearDown network for sandbox \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.682473104Z" level=info msg="StopPodSandbox for \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.683052142Z" level=info msg="RemovePodSandbox for \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.683616000Z" level=info msg="Forcibly stopping sandbox \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.691609763Z" level=info msg="TearDown network for sandbox \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.697872360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.698006810Z" level=info msg="RemovePodSandbox \"a7b10f89e2941013ca9a236522b8bcf8b0be7966bc72c6fefd31a222bb241107\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.698627038Z" level=info msg="StopPodSandbox for \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.706940170Z" level=info msg="TearDown network for sandbox \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.707150238Z" level=info msg="StopPodSandbox for \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.707890705Z" level=info msg="RemovePodSandbox for \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.707933889Z" level=info msg="Forcibly stopping sandbox \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.720292099Z" level=info msg="TearDown network for sandbox \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.726378671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.726492911Z" level=info msg="RemovePodSandbox \"84386a41ea7ecd6ffe0ddf7e5060f049d057c98f871f6d34e6307633b32c5b23\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.726971772Z" level=info msg="StopPodSandbox for \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.734495593Z" level=info msg="TearDown network for sandbox \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.734538383Z" level=info msg="StopPodSandbox for \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\" returns successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.734979271Z" level=info msg="RemovePodSandbox for \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.735019221Z" level=info msg="Forcibly stopping sandbox \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\""
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.743240103Z" level=info msg="TearDown network for sandbox \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\" successfully"
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.749037352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 03 22:53:52 addons-369401 containerd[815]: time="2024-08-03T22:53:52.749152397Z" level=info msg="RemovePodSandbox \"9118c448c519e8f20a2dfe7d8968505c88269acb7b71aa92693134bc72de9901\" returns successfully"
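These containerd entries are the kubelet's periodic garbage collection of exited pod sandboxes; the "not found" warnings are benign, since the sandbox is already gone by the time the status event is emitted. To list whatever sandboxes remain on the node, one option is crictl over minikube ssh (a sketch using standard crictl subcommands; the profile name matches this run):

	minikube -p addons-369401 ssh -- sudo crictl pods --state NotReady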
	
	
	==> coredns [95bcfb988da3a697c1b4952a40661123053df22841c6167af35237de61d67431] <==
	[INFO] 10.244.0.4:57722 - 11747 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047631s
	[INFO] 10.244.0.4:51484 - 6403 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002110536s
	[INFO] 10.244.0.4:51484 - 3585 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002185318s
	[INFO] 10.244.0.4:52822 - 4455 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009353s
	[INFO] 10.244.0.4:52822 - 46202 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000179341s
	[INFO] 10.244.0.4:51593 - 11775 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001248s
	[INFO] 10.244.0.4:51593 - 29179 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00024056s
	[INFO] 10.244.0.4:44472 - 9938 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102614s
	[INFO] 10.244.0.4:44472 - 35277 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074561s
	[INFO] 10.244.0.4:37171 - 5321 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085178s
	[INFO] 10.244.0.4:37171 - 56519 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000466366s
	[INFO] 10.244.0.4:58009 - 49972 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00190909s
	[INFO] 10.244.0.4:58009 - 45366 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001862001s
	[INFO] 10.244.0.4:41668 - 38364 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074757s
	[INFO] 10.244.0.4:41668 - 8414 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000042929s
	[INFO] 10.244.0.24:52427 - 4401 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000155045s
	[INFO] 10.244.0.24:36777 - 48392 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112944s
	[INFO] 10.244.0.24:44855 - 10407 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107676s
	[INFO] 10.244.0.24:58360 - 54765 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104255s
	[INFO] 10.244.0.24:54234 - 53398 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110212s
	[INFO] 10.244.0.24:35808 - 32906 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111697s
	[INFO] 10.244.0.24:33032 - 57718 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002293426s
	[INFO] 10.244.0.24:59508 - 16896 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002678601s
	[INFO] 10.244.0.24:54428 - 16135 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000812221s
	[INFO] 10.244.0.24:47932 - 28628 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000615855s
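The NXDOMAIN runs above are ordinary resolv.conf search-list expansion: with the cluster default of ndots:5, a name like storage.googleapis.com is tried against every search suffix before the bare name finally resolves (the two NOERROR lines). A representative pod resolv.conf reconstructed from the suffixes in these queries (the nameserver IP is the conventional kube-dns ClusterIP and is an assumption, not shown in this log):

	nameserver 10.96.0.10   # assumed kube-dns service IP
	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	options ndots:5         # Kubernetes default; forces suffix expansion for names with fewer than 5 dots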
	
	
	==> describe nodes <==
	Name:               addons-369401
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-369401
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=addons-369401
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T22_49_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-369401
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-369401"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 22:49:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-369401
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 22:56:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 22:52:56 +0000   Sat, 03 Aug 2024 22:49:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 22:52:56 +0000   Sat, 03 Aug 2024 22:49:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 22:52:56 +0000   Sat, 03 Aug 2024 22:49:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 22:52:56 +0000   Sat, 03 Aug 2024 22:50:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-369401
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 5fdc6deeab964127a1008d130406a16a
	  System UUID:                6d9b4f4a-7109-4db3-ba68-155267b21769
	  Boot ID:                    7d37f827-388f-4261-892f-42defe929bba
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5455fb9b69-56dr7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gadget                      gadget-fwqrb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-5db96cd9b4-fq9cd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-vxjwc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m52s
	  kube-system                 coredns-7db6d8ff4d-bxjfr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpathplugin-zmdnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 etcd-addons-369401                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m14s
	  kube-system                 kindnet-5nntp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-369401                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-addons-369401        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-5n228                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-369401                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 metrics-server-c59844bb4-6p6m4               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m55s
	  kube-system                 nvidia-device-plugin-daemonset-zhsj7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-698f998955-d9572                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 registry-proxy-tc486                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-745499f584-xhrjl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 snapshot-controller-745499f584-xl2t9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-8d985888d-8s9ds       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-5f7844f7bc-dr4dc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-59cb4746db-v7g8t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-844f6db89b-4f6t9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-799879c74f-mr7dg              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m59s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node addons-369401 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m14s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node addons-369401 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node addons-369401 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node addons-369401 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m14s  kubelet          Node addons-369401 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m3s   kubelet          Node addons-369401 status is now: NodeReady
	  Normal  RegisteredNode           6m1s   node-controller  Node addons-369401 event: Registered Node addons-369401 in Controller
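One number worth pulling out of this node description for the Volcano failure: the node has 2 allocatable CPUs (2000m) and the non-terminated pods already request 1050m (52%), leaving only about 950m schedulable, so any new pod requesting more than that stays Pending for lack of CPU. Recomputing the headroom (standard kubectl jsonpath fields):

	kubectl --context addons-369401 get node addons-369401 -o jsonpath='{.status.allocatable.cpu}'
	kubectl --context addons-369401 describe node addons-369401 | grep -A 8 'Allocated resources'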
	
	
	==> dmesg <==
	[  +0.000997] FS-Cache: O-key=[8] 'de3e5c0100000000'
	[  +0.000653] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000845] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=00000000fdae399f
	[  +0.000941] FS-Cache: N-key=[8] 'de3e5c0100000000'
	[  +0.003093] FS-Cache: Duplicate cookie detected
	[  +0.000624] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000864] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=00000000d9ee9a63
	[  +0.000984] FS-Cache: O-key=[8] 'de3e5c0100000000'
	[  +0.000633] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000844] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=0000000031d671ab
	[  +0.000944] FS-Cache: N-key=[8] 'de3e5c0100000000'
	[  +2.705946] FS-Cache: Duplicate cookie detected
	[  +0.000640] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000865] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=0000000035df286d
	[  +0.000943] FS-Cache: O-key=[8] 'dd3e5c0100000000'
	[  +0.000650] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000888] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=00000000fdae399f
	[  +0.000952] FS-Cache: N-key=[8] 'dd3e5c0100000000'
	[  +0.262745] FS-Cache: Duplicate cookie detected
	[  +0.000642] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000871] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=00000000c7b2fe56
	[  +0.000947] FS-Cache: O-key=[8] 'e33e5c0100000000'
	[  +0.000639] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000879] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=000000002daa5fdd
	[  +0.000948] FS-Cache: N-key=[8] 'e33e5c0100000000'
	
	
	==> etcd [20f5c267d9c7bb6bcc8ba8fd0cb255e3f13501fd9d23da3d811550d744b3cf5e] <==
	{"level":"info","ts":"2024-08-03T22:49:46.342937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-03T22:49:46.343211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-03T22:49:46.349341Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-03T22:49:46.349516Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-03T22:49:46.349531Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-03T22:49:46.350587Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-03T22:49:46.350708Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-03T22:49:46.422283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-03T22:49:46.422623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-03T22:49:46.422732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-03T22:49:46.422892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-03T22:49:46.422994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-03T22:49:46.423155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-03T22:49:46.423252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-03T22:49:46.428471Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:49:46.428986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:49:46.431245Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-03T22:49:46.428955Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-369401 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-03T22:49:46.43249Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-03T22:49:46.432786Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-03T22:49:46.432816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-03T22:49:46.432916Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:49:46.432985Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:49:46.43301Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-03T22:49:46.457714Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [19a5bf784361297bad84438860fcafe9782e9238a864a01445b2fcaa4a283f0c] <==
	2024/08/03 22:52:47 GCP Auth Webhook started!
	2024/08/03 22:53:04 Ready to marshal response ...
	2024/08/03 22:53:04 Ready to write response ...
	2024/08/03 22:53:05 Ready to marshal response ...
	2024/08/03 22:53:05 Ready to write response ...
	
	
	==> kernel <==
	 22:56:07 up  7:38,  0 users,  load average: 0.33, 0.87, 0.93
	Linux addons-369401 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [d0d295b02de16fcd0ffc96fbf8bff3e493a68360df1cece372977fc6befa5d7e] <==
	E0803 22:54:49.861562       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0803 22:54:50.761655       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:54:50.761693       1 main.go:299] handling current node
	I0803 22:55:00.762372       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:00.762407       1 main.go:299] handling current node
	I0803 22:55:10.762147       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:10.762182       1 main.go:299] handling current node
	W0803 22:55:19.525522       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 22:55:19.525556       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0803 22:55:20.761830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:20.761865       1 main.go:299] handling current node
	W0803 22:55:21.130035       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 22:55:21.130075       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0803 22:55:30.762605       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:30.762639       1 main.go:299] handling current node
	W0803 22:55:38.112079       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0803 22:55:38.112113       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0803 22:55:40.762655       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:40.762698       1 main.go:299] handling current node
	I0803 22:55:50.762097       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:55:50.762135       1 main.go:299] handling current node
	I0803 22:56:00.762341       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0803 22:56:00.762377       1 main.go:299] handling current node
	W0803 22:56:06.860176       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 22:56:06.860215       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
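The forbidden errors above mean the kindnet service account lacks list/watch on pods, namespaces, and networkpolicies at cluster scope. A minimal ClusterRole sketch that would satisfy exactly these three reflectors (the object name and any binding are hypothetical; the shipped kindnet manifest may organize its RBAC differently):

	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRole
	metadata:
	  name: kindnet-watch            # hypothetical name
	rules:
	- apiGroups: [""]                # core API group
	  resources: ["pods", "namespaces"]
	  verbs: ["list", "watch"]
	- apiGroups: ["networking.k8s.io"]
	  resources: ["networkpolicies"]
	  verbs: ["list", "watch"]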
	
	
	==> kube-apiserver [22103fc3f0bef9706c278484350e4ba964114f7f0798f8f1cab92266b6c912c5] <==
	W0803 22:51:19.115141       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:20.217386       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:21.087634       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	E0803 22:51:21.087677       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	W0803 22:51:21.088093       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:21.139285       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	E0803 22:51:21.139322       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	W0803 22:51:21.139755       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:21.319930       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:22.359553       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:23.454611       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:24.483403       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:25.505005       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:26.523582       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:27.588842       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:28.633861       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:29.734761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.80.118:443: connect: connection refused
	W0803 22:51:41.103655       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	E0803 22:51:41.103694       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	W0803 22:52:21.095028       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	E0803 22:52:21.095071       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	W0803 22:52:21.145202       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	E0803 22:52:21.145237       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.164.71:443: connect: connection refused
	I0803 22:53:04.744059       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0803 22:53:04.788848       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
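The "failing closed" versus "failing open" wording in these apiserver lines maps to each webhook's failurePolicy: Volcano's mutatequeue.volcano.sh webhook fails closed (requests are rejected while its backend is unreachable), while gcp-auth-mutate.k8s.io fails open (requests proceed unmutated). A stripped-down sketch of the controlling field, using the service coordinates from the log (the two webhooks actually live in separate configuration objects, and rules, sideEffects, and other required fields are elided):

	apiVersion: admissionregistration.k8s.io/v1
	kind: MutatingWebhookConfiguration
	metadata:
	  name: webhook-sketch           # hypothetical name
	webhooks:
	- name: mutatequeue.volcano.sh
	  failurePolicy: Fail            # "failing closed": the API call errors if the webhook is down
	  clientConfig:
	    service:
	      name: volcano-admission-service
	      namespace: volcano-system
	      path: /queues/mutate
	- name: gcp-auth-mutate.k8s.io
	  failurePolicy: Ignore          # "failing open": the request is admitted unmutated
	  clientConfig:
	    service:
	      name: gcp-auth
	      namespace: gcp-auth
	      path: /mutate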
	
	
	==> kube-controller-manager [d1681da3c231d8128a927871f1d268ae602446c5ae1280682f5059244479f3fe] <==
	I0803 22:52:21.115921       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:21.129548       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:21.154178       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:21.165748       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:21.170438       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:21.180800       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:22.400082       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:22.413168       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:23.507827       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:23.529113       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:24.414344       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:24.435341       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:24.515013       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:24.527150       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:24.537690       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:24.539160       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:24.547113       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:24.553870       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:47.497654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="10.765719ms"
	I0803 22:52:47.498622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="34.462µs"
	I0803 22:52:54.037563       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:52:54.041055       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:54.104338       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0803 22:52:54.105786       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0803 22:53:04.446090       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
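The job controller re-enqueues gcp-auth-certs-create and gcp-auth-certs-patch several times while their pods run to completion; this is ordinary reconciliation rather than an error. Hypothetical follow-up commands (not from the captured run) to confirm both jobs completed:
	kubectl --context addons-369401 -n gcp-auth get jobs
	kubectl --context addons-369401 -n gcp-auth describe job gcp-auth-certs-patch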
	
	
	==> kube-proxy [a95723a1acb3f24cc582ca5d74486f30a473043186dffe31664acb73cff4f1b7] <==
	I0803 22:50:07.536685       1 server_linux.go:69] "Using iptables proxy"
	I0803 22:50:07.599969       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0803 22:50:07.642720       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0803 22:50:07.642779       1 server_linux.go:165] "Using iptables Proxier"
	I0803 22:50:07.646775       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0803 22:50:07.646818       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0803 22:50:07.646846       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0803 22:50:07.647075       1 server.go:872] "Version info" version="v1.30.3"
	I0803 22:50:07.647095       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0803 22:50:07.648485       1 config.go:192] "Starting service config controller"
	I0803 22:50:07.648507       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0803 22:50:07.648550       1 config.go:101] "Starting endpoint slice config controller"
	I0803 22:50:07.648555       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0803 22:50:07.649313       1 config.go:319] "Starting node config controller"
	I0803 22:50:07.649340       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0803 22:50:07.749365       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0803 22:50:07.749447       1 shared_informer.go:320] Caches are synced for service config
	I0803 22:50:07.749695       1 shared_informer.go:320] Caches are synced for node config
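kube-proxy comes up in iptables mode with IPv4 as the primary family; with no IPv6 cluster CIDR configured, IPv6 local-traffic detection falls back to a no-op, which is expected on this single-stack cluster. The effective proxier settings can be read back from the kubeadm-managed ConfigMap (a sketch; the object name is assumed from kubeadm defaults):
	kubectl --context addons-369401 -n kube-system get configmap kube-proxy -o yaml | grep -E 'mode|clusterCIDR'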
	
	
	==> kube-scheduler [d6d29731180605e1fe28433491ddeade9d3c80acae4c3ea1f9b64d824414d779] <==
	W0803 22:49:50.046219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 22:49:50.046236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0803 22:49:50.046371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0803 22:49:50.046391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0803 22:49:50.046474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 22:49:50.046504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 22:49:50.046519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 22:49:50.046536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0803 22:49:50.046769       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 22:49:50.046789       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 22:49:50.869861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0803 22:49:50.869907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0803 22:49:50.878140       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 22:49:50.878354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0803 22:49:50.934171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 22:49:50.935168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0803 22:49:50.935508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 22:49:50.935533       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0803 22:49:51.049264       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 22:49:51.049489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0803 22:49:51.123655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 22:49:51.123867       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 22:49:51.230713       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 22:49:51.230904       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 22:49:54.135774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
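The forbidden list/watch errors above are confined to the seconds before the cluster's RBAC bindings propagate; in this excerpt the last informer cache syncs at 22:49:54 and no further errors follow, so this is startup noise rather than a persistent permission problem. If it did persist, a direct check would be (hypothetical command, not from the captured run):
	kubectl --context addons-369401 auth can-i list poddisruptionbudgets --as=system:kube-scheduler --all-namespaces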
	
	
	==> kubelet <==
	Aug 03 22:54:13 addons-369401 kubelet[1549]: E0803 22:54:13.588914    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:54:18 addons-369401 kubelet[1549]: I0803 22:54:18.588108    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-bxjfr" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:54:20 addons-369401 kubelet[1549]: I0803 22:54:20.588556    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zhsj7" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:54:27 addons-369401 kubelet[1549]: I0803 22:54:27.588091    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:54:27 addons-369401 kubelet[1549]: E0803 22:54:27.588595    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:54:28 addons-369401 kubelet[1549]: I0803 22:54:28.588367    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tc486" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:54:34 addons-369401 kubelet[1549]: I0803 22:54:34.588594    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-698f998955-d9572" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:54:40 addons-369401 kubelet[1549]: I0803 22:54:40.588806    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:54:40 addons-369401 kubelet[1549]: E0803 22:54:40.589301    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:54:51 addons-369401 kubelet[1549]: I0803 22:54:51.588336    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:54:51 addons-369401 kubelet[1549]: E0803 22:54:51.589381    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:55:05 addons-369401 kubelet[1549]: I0803 22:55:05.588241    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:55:05 addons-369401 kubelet[1549]: E0803 22:55:05.588807    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:55:17 addons-369401 kubelet[1549]: I0803 22:55:17.588497    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:55:17 addons-369401 kubelet[1549]: E0803 22:55:17.589522    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:55:26 addons-369401 kubelet[1549]: I0803 22:55:26.588767    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zhsj7" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:55:30 addons-369401 kubelet[1549]: I0803 22:55:30.589292    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:55:30 addons-369401 kubelet[1549]: E0803 22:55:30.590226    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:55:35 addons-369401 kubelet[1549]: I0803 22:55:35.588097    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-698f998955-d9572" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:55:39 addons-369401 kubelet[1549]: I0803 22:55:39.588169    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-bxjfr" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:55:43 addons-369401 kubelet[1549]: I0803 22:55:43.588462    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:55:43 addons-369401 kubelet[1549]: E0803 22:55:43.589015    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
	Aug 03 22:55:54 addons-369401 kubelet[1549]: I0803 22:55:54.588614    1549 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-tc486" secret="" err="secret \"gcp-auth\" not found"
	Aug 03 22:55:55 addons-369401 kubelet[1549]: I0803 22:55:55.588453    1549 scope.go:117] "RemoveContainer" containerID="fb7d90dfa5f1d37d8dbc2d7e4789dbfbf3c87b20043f384e77a50705997d55e2"
	Aug 03 22:55:55 addons-369401 kubelet[1549]: E0803 22:55:55.588986    1549 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-fwqrb_gadget(b098e080-bfc9-416c-94a9-5f7c27c62ada)\"" pod="gadget/gadget-fwqrb" podUID="b098e080-bfc9-416c-94a9-5f7c27c62ada"
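Two distinct issues repeat through the kubelet log: the missing gcp-auth image-pull secret, which is only a warning, and the gadget container, which is stuck in CrashLoopBackOff with a 2m40s back-off. The crash reason would live in the container's previous logs; sketch commands using the pod name from the log above:
	kubectl --context addons-369401 -n gadget logs gadget-fwqrb -c gadget --previous
	kubectl --context addons-369401 -n gadget describe pod gadget-fwqrb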
	
	
	==> storage-provisioner [ecadf7a9b5c5074da0a39cbbd39a1addec8b2e99ad1faf7338c3b072cb1e0482] <==
	I0803 22:50:11.366679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 22:50:11.388846       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 22:50:11.394912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 22:50:11.418403       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 22:50:11.425146       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-369401_ed11642a-5107-41ed-b4c7-c2b3a7773673!
	I0803 22:50:11.426218       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a3db354-6434-44e9-956a-a0969da385d8", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-369401_ed11642a-5107-41ed-b4c7-c2b3a7773673 became leader
	I0803 22:50:11.525546       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-369401_ed11642a-5107-41ed-b4c7-c2b3a7773673!
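The storage provisioner starts cleanly: it wins the k8s.io-minikube-hostpath leader election via an Endpoints-based lock and then starts its controller. The lock object named in the event above can be inspected directly (an illustrative command, not from the captured run):
	kubectl --context addons-369401 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml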
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-369401 -n addons-369401
helpers_test.go:261: (dbg) Run:  kubectl --context addons-369401 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-rxqns ingress-nginx-admission-patch-szpgd test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-369401 describe pod ingress-nginx-admission-create-rxqns ingress-nginx-admission-patch-szpgd test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-369401 describe pod ingress-nginx-admission-create-rxqns ingress-nginx-admission-patch-szpgd test-job-nginx-0: exit status 1 (124.789245ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rxqns" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-szpgd" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-369401 describe pod ingress-nginx-admission-create-rxqns ingress-nginx-admission-patch-szpgd test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.93s)
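The Volcano failure is a scheduling timeout rather than a component crash: test-job-nginx-0 appears among the non-running pods above and was never scheduled before the test gave up. On a live run, comparing the node's allocated CPU against its allocatable capacity would show whether the addon stack left room for the job (sketch commands, assuming the single-node profile name doubles as the node name):
	kubectl --context addons-369401 describe node addons-369401 | grep -A 8 'Allocated resources'
	kubectl --context addons-369401 -n my-volcano get pod test-job-nginx-0 -o jsonpath='{.spec.containers[*].resources.requests}'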

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (379.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.160443961s)

-- stdout --
	* [old-k8s-version-820414] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-820414" primary control-plane node in "old-k8s-version-820414" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Restarting existing docker container for "old-k8s-version-820414" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-820414 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0803 23:39:14.147491 1389119 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:39:14.147636 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:39:14.147647 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:39:14.147653 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:39:14.147998 1389119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:39:14.148430 1389119 out.go:298] Setting JSON to false
	I0803 23:39:14.149516 1389119 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30100,"bootTime":1722698255,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 23:39:14.149616 1389119 start.go:139] virtualization:  
	I0803 23:39:14.152711 1389119 out.go:177] * [old-k8s-version-820414] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 23:39:14.155215 1389119 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:39:14.155516 1389119 notify.go:220] Checking for updates...
	I0803 23:39:14.159562 1389119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:39:14.161757 1389119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:39:14.163888 1389119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 23:39:14.165684 1389119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 23:39:14.167888 1389119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:39:14.170160 1389119 config.go:182] Loaded profile config "old-k8s-version-820414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0803 23:39:14.173054 1389119 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0803 23:39:14.175100 1389119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:39:14.197810 1389119 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 23:39:14.197926 1389119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:39:14.263429 1389119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-03 23:39:14.253633743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:39:14.263543 1389119 docker.go:307] overlay module found
	I0803 23:39:14.266105 1389119 out.go:177] * Using the docker driver based on existing profile
	I0803 23:39:14.267922 1389119 start.go:297] selected driver: docker
	I0803 23:39:14.267940 1389119 start.go:901] validating driver "docker" against &{Name:old-k8s-version-820414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:39:14.268045 1389119 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:39:14.268699 1389119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:39:14.326873 1389119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-03 23:39:14.317409427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:39:14.327278 1389119 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:39:14.327310 1389119 cni.go:84] Creating CNI manager for ""
	I0803 23:39:14.327318 1389119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 23:39:14.327371 1389119 start.go:340] cluster config:
	{Name:old-k8s-version-820414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:39:14.329794 1389119 out.go:177] * Starting "old-k8s-version-820414" primary control-plane node in "old-k8s-version-820414" cluster
	I0803 23:39:14.331520 1389119 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 23:39:14.333883 1389119 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0803 23:39:14.335569 1389119 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0803 23:39:14.335624 1389119 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0803 23:39:14.335636 1389119 cache.go:56] Caching tarball of preloaded images
	I0803 23:39:14.335644 1389119 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 23:39:14.335713 1389119 preload.go:172] Found /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0803 23:39:14.335723 1389119 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0803 23:39:14.335865 1389119 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/config.json ...
	W0803 23:39:14.354758 1389119 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0803 23:39:14.354787 1389119 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 23:39:14.354857 1389119 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 23:39:14.354881 1389119 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0803 23:39:14.354886 1389119 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0803 23:39:14.354895 1389119 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0803 23:39:14.354903 1389119 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0803 23:39:14.476368 1389119 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0803 23:39:14.476423 1389119 cache.go:194] Successfully downloaded all kic artifacts
	I0803 23:39:14.476456 1389119 start.go:360] acquireMachinesLock for old-k8s-version-820414: {Name:mkcac03898fa5abd86892a51d5a90af3645fb5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:39:14.476524 1389119 start.go:364] duration metric: took 38.761µs to acquireMachinesLock for "old-k8s-version-820414"
	I0803 23:39:14.476548 1389119 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:39:14.476562 1389119 fix.go:54] fixHost starting: 
	I0803 23:39:14.476862 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:14.492893 1389119 fix.go:112] recreateIfNeeded on old-k8s-version-820414: state=Stopped err=<nil>
	W0803 23:39:14.492925 1389119 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:39:14.496321 1389119 out.go:177] * Restarting existing docker container for "old-k8s-version-820414" ...
	I0803 23:39:14.497913 1389119 cli_runner.go:164] Run: docker start old-k8s-version-820414
	I0803 23:39:14.797610 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:14.818397 1389119 kic.go:430] container "old-k8s-version-820414" state is running.
	I0803 23:39:14.821084 1389119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-820414
	I0803 23:39:14.845743 1389119 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/config.json ...
	I0803 23:39:14.845962 1389119 machine.go:94] provisionDockerMachine start ...
	I0803 23:39:14.846018 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:14.867314 1389119 main.go:141] libmachine: Using SSH client type: native
	I0803 23:39:14.867595 1389119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34543 <nil> <nil>}
	I0803 23:39:14.867604 1389119 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:39:14.868268 1389119 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33922->127.0.0.1:34543: read: connection reset by peer
	I0803 23:39:18.012238 1389119 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-820414
	
	I0803 23:39:18.012335 1389119 ubuntu.go:169] provisioning hostname "old-k8s-version-820414"
	I0803 23:39:18.012462 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:18.037572 1389119 main.go:141] libmachine: Using SSH client type: native
	I0803 23:39:18.037836 1389119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34543 <nil> <nil>}
	I0803 23:39:18.037855 1389119 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-820414 && echo "old-k8s-version-820414" | sudo tee /etc/hostname
	I0803 23:39:18.186874 1389119 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-820414
	
	I0803 23:39:18.187000 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:18.204803 1389119 main.go:141] libmachine: Using SSH client type: native
	I0803 23:39:18.205064 1389119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34543 <nil> <nil>}
	I0803 23:39:18.205089 1389119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-820414' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-820414/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-820414' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0803 23:39:18.337042 1389119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:39:18.337081 1389119 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19364-1180294/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-1180294/.minikube}
	I0803 23:39:18.337111 1389119 ubuntu.go:177] setting up certificates
	I0803 23:39:18.337120 1389119 provision.go:84] configureAuth start
	I0803 23:39:18.337179 1389119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-820414
	I0803 23:39:18.355665 1389119 provision.go:143] copyHostCerts
	I0803 23:39:18.355748 1389119 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem, removing ...
	I0803 23:39:18.355764 1389119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem
	I0803 23:39:18.355842 1389119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem (1078 bytes)
	I0803 23:39:18.355944 1389119 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem, removing ...
	I0803 23:39:18.355953 1389119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem
	I0803 23:39:18.355980 1389119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem (1123 bytes)
	I0803 23:39:18.356039 1389119 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem, removing ...
	I0803 23:39:18.356049 1389119 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem
	I0803 23:39:18.356076 1389119 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem (1675 bytes)
	I0803 23:39:18.356129 1389119 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-820414 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-820414]
	I0803 23:39:18.743745 1389119 provision.go:177] copyRemoteCerts
	I0803 23:39:18.743816 1389119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:39:18.743862 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:18.764483 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:18.866454 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:39:18.893471 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0803 23:39:18.917934 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:39:18.942528 1389119 provision.go:87] duration metric: took 605.394262ms to configureAuth
	I0803 23:39:18.942558 1389119 ubuntu.go:193] setting minikube options for container-runtime
	I0803 23:39:18.942763 1389119 config.go:182] Loaded profile config "old-k8s-version-820414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0803 23:39:18.942777 1389119 machine.go:97] duration metric: took 4.096804379s to provisionDockerMachine
	I0803 23:39:18.942790 1389119 start.go:293] postStartSetup for "old-k8s-version-820414" (driver="docker")
	I0803 23:39:18.942807 1389119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:39:18.942856 1389119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:39:18.942901 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:18.959330 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:19.054051 1389119 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:39:19.057222 1389119 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0803 23:39:19.057255 1389119 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0803 23:39:19.057265 1389119 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0803 23:39:19.057272 1389119 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0803 23:39:19.057282 1389119 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/addons for local assets ...
	I0803 23:39:19.057334 1389119 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/files for local assets ...
	I0803 23:39:19.057424 1389119 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem -> 11857022.pem in /etc/ssl/certs
	I0803 23:39:19.057527 1389119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:39:19.065930 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem --> /etc/ssl/certs/11857022.pem (1708 bytes)
	I0803 23:39:19.090870 1389119 start.go:296] duration metric: took 148.057046ms for postStartSetup
	I0803 23:39:19.090976 1389119 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:39:19.091030 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:19.108276 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:19.202327 1389119 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0803 23:39:19.206916 1389119 fix.go:56] duration metric: took 4.730347593s for fixHost
	I0803 23:39:19.206942 1389119 start.go:83] releasing machines lock for "old-k8s-version-820414", held for 4.73040607s
	I0803 23:39:19.207010 1389119 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-820414
	I0803 23:39:19.223322 1389119 ssh_runner.go:195] Run: cat /version.json
	I0803 23:39:19.223376 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:19.223404 1389119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:39:19.223465 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:19.241521 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:19.247370 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:19.481412 1389119 ssh_runner.go:195] Run: systemctl --version
	I0803 23:39:19.485853 1389119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0803 23:39:19.490137 1389119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0803 23:39:19.509010 1389119 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0803 23:39:19.509131 1389119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:39:19.518575 1389119 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:39:19.518644 1389119 start.go:495] detecting cgroup driver to use...
	I0803 23:39:19.518695 1389119 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0803 23:39:19.518784 1389119 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 23:39:19.533162 1389119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 23:39:19.545197 1389119 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:39:19.545260 1389119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:39:19.558038 1389119 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:39:19.569426 1389119 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:39:19.653764 1389119 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:39:19.746906 1389119 docker.go:233] disabling docker service ...
	I0803 23:39:19.746985 1389119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:39:19.759714 1389119 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:39:19.771417 1389119 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:39:19.865985 1389119 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:39:19.957979 1389119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:39:19.969830 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:39:19.988065 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0803 23:39:19.997979 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 23:39:20.015273 1389119 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 23:39:20.015414 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 23:39:20.030335 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 23:39:20.042065 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 23:39:20.053440 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 23:39:20.063856 1389119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:39:20.074630 1389119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 23:39:20.085871 1389119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:39:20.097319 1389119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:39:20.107474 1389119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:39:20.208166 1389119 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 23:39:20.391272 1389119 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0803 23:39:20.391426 1389119 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0803 23:39:20.395297 1389119 start.go:563] Will wait 60s for crictl version
	I0803 23:39:20.395430 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:39:20.399280 1389119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:39:20.439747 1389119 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0803 23:39:20.439866 1389119 ssh_runner.go:195] Run: containerd --version
	I0803 23:39:20.474292 1389119 ssh_runner.go:195] Run: containerd --version
	I0803 23:39:20.502706 1389119 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	I0803 23:39:20.504614 1389119 cli_runner.go:164] Run: docker network inspect old-k8s-version-820414 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0803 23:39:20.520277 1389119 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0803 23:39:20.524265 1389119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:39:20.535119 1389119 kubeadm.go:883] updating cluster {Name:old-k8s-version-820414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:39:20.535252 1389119 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0803 23:39:20.535314 1389119 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:39:20.572850 1389119 containerd.go:627] all images are preloaded for containerd runtime.
	I0803 23:39:20.572876 1389119 containerd.go:534] Images already preloaded, skipping extraction
	I0803 23:39:20.572935 1389119 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:39:20.611313 1389119 containerd.go:627] all images are preloaded for containerd runtime.
	I0803 23:39:20.611338 1389119 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:39:20.611346 1389119 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0803 23:39:20.611468 1389119 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-820414 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
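The [Service] block above blanks the packaged ExecStart and substitutes the version-pinned kubelet, wired to containerd's CRI socket and this node's IP; it is written out below as the 442-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Assuming systemd inside the kicbase container (which the docker driver provides), the merged unit can be inspected with:

    systemctl cat kubelet                   # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart     # the effective command line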
	I0803 23:39:20.611536 1389119 ssh_runner.go:195] Run: sudo crictl info
	I0803 23:39:20.652302 1389119 cni.go:84] Creating CNI manager for ""
	I0803 23:39:20.652324 1389119 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 23:39:20.652333 1389119 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:39:20.652374 1389119 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-820414 NodeName:old-k8s-version-820414 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0803 23:39:20.652580 1389119 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-820414"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:39:20.652662 1389119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0803 23:39:20.661874 1389119 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:39:20.661952 1389119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:39:20.670672 1389119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0803 23:39:20.689127 1389119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0803 23:39:20.708487 1389119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
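The 2125-byte file just written is the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, on the v1.20-era v1beta2/v1beta1 APIs). On a restart, minikube only re-runs kubeadm when this render differs from the copy already on disk; the check is the plain diff visible at 23:39:21.415 below:

    # no output and exit 0 -> "The running cluster does not require reconfiguration"
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new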
	I0803 23:39:20.726944 1389119 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0803 23:39:20.730383 1389119 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:39:20.741587 1389119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:39:20.834974 1389119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:39:20.853387 1389119 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414 for IP: 192.168.76.2
	I0803 23:39:20.853413 1389119 certs.go:194] generating shared ca certs ...
	I0803 23:39:20.853430 1389119 certs.go:226] acquiring lock for ca certs: {Name:mk245d61d460943c9f9c4518cc1e3561b25bafd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:39:20.853567 1389119 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key
	I0803 23:39:20.853625 1389119 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key
	I0803 23:39:20.853648 1389119 certs.go:256] generating profile certs ...
	I0803 23:39:20.853739 1389119 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.key
	I0803 23:39:20.853817 1389119 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/apiserver.key.f3883ae6
	I0803 23:39:20.853860 1389119 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/proxy-client.key
	I0803 23:39:20.853977 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702.pem (1338 bytes)
	W0803 23:39:20.854016 1389119 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702_empty.pem, impossibly tiny 0 bytes
	I0803 23:39:20.854031 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:39:20.854055 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:39:20.854081 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:39:20.854111 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem (1675 bytes)
	I0803 23:39:20.854160 1389119 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem (1708 bytes)
	I0803 23:39:20.854874 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:39:20.883958 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 23:39:20.911574 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:39:20.937587 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:39:20.964570 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0803 23:39:21.003546 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0803 23:39:21.035089 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:39:21.065489 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:39:21.093262 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem --> /usr/share/ca-certificates/11857022.pem (1708 bytes)
	I0803 23:39:21.120534 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:39:21.148396 1389119 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702.pem --> /usr/share/ca-certificates/1185702.pem (1338 bytes)
	I0803 23:39:21.174201 1389119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:39:21.193998 1389119 ssh_runner.go:195] Run: openssl version
	I0803 23:39:21.201041 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11857022.pem && ln -fs /usr/share/ca-certificates/11857022.pem /etc/ssl/certs/11857022.pem"
	I0803 23:39:21.211471 1389119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11857022.pem
	I0803 23:39:21.215155 1389119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 22:59 /usr/share/ca-certificates/11857022.pem
	I0803 23:39:21.215223 1389119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11857022.pem
	I0803 23:39:21.222036 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11857022.pem /etc/ssl/certs/3ec20f2e.0"
	I0803 23:39:21.230857 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:39:21.240251 1389119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:39:21.243901 1389119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:39:21.244016 1389119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:39:21.251025 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:39:21.260023 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1185702.pem && ln -fs /usr/share/ca-certificates/1185702.pem /etc/ssl/certs/1185702.pem"
	I0803 23:39:21.269879 1389119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1185702.pem
	I0803 23:39:21.273517 1389119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 22:59 /usr/share/ca-certificates/1185702.pem
	I0803 23:39:21.273621 1389119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1185702.pem
	I0803 23:39:21.281111 1389119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1185702.pem /etc/ssl/certs/51391683.0"
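The ls/openssl/ln triples above are a manual c_rehash: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filename (<hash>.0), so each imported PEM gets a symlink named after its hash. For one certificate the operation is:

    # derive the subject hash and create the lookup symlink (b5213941.0 above came from this)
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"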
	I0803 23:39:21.290623 1389119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:39:21.294674 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:39:21.301950 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:39:21.308952 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:39:21.315684 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:39:21.322720 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:39:21.329677 1389119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
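The openssl runs above are expiry checks: -checkend N exits non-zero if the certificate expires within N seconds, so 86400 asks "still valid 24 hours from now?". For example:

    # exit 0: valid for at least 24h; exit 1: expires (or already expired) within 24h
    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "cert ok" || echo "renewal needed"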
	I0803 23:39:21.336543 1389119 kubeadm.go:392] StartCluster: {Name:old-k8s-version-820414 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-820414 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:39:21.336656 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0803 23:39:21.336805 1389119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:39:21.373893 1389119 cri.go:89] found id: "0b050dde3721b6d4dd3b0797f920eef0da5c15babdef3596394c04b62e4cce35"
	I0803 23:39:21.373917 1389119 cri.go:89] found id: "1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:39:21.373922 1389119 cri.go:89] found id: "0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:39:21.373926 1389119 cri.go:89] found id: "9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:39:21.373929 1389119 cri.go:89] found id: "4f46c8a0a157fd605ed8713e34a2776ca3d9ebdc5840738a522ffaaae27933e7"
	I0803 23:39:21.373933 1389119 cri.go:89] found id: "e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:39:21.373936 1389119 cri.go:89] found id: "fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:39:21.373939 1389119 cri.go:89] found id: "17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:39:21.373942 1389119 cri.go:89] found id: "1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:39:21.373951 1389119 cri.go:89] found id: ""
	I0803 23:39:21.374004 1389119 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0803 23:39:21.394398 1389119 cri.go:116] JSON = null
	W0803 23:39:21.394471 1389119 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
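The warning above comes from the paused-container probe: crictl's metadata store still lists 9 kube-system containers from before the restart, while runc's live-state query returns null, i.e. nothing is actually running or paused yet, so there is nothing to unpause. The two sides of that comparison, as run here:

    # containers known to the CRI metadata store ...
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l    # 9 here
    # ... versus live runc state ("null" here: no running or paused containers)
    sudo runc --root /run/containerd/runc/k8s.io list -f json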
	I0803 23:39:21.394559 1389119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:39:21.403483 1389119 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 23:39:21.403502 1389119 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 23:39:21.403579 1389119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 23:39:21.412490 1389119 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:39:21.413177 1389119 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-820414" does not appear in /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:39:21.413475 1389119 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-1180294/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-820414" cluster setting kubeconfig missing "old-k8s-version-820414" context setting]
	I0803 23:39:21.413921 1389119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/kubeconfig: {Name:mk7ac442c13ee76103bb330a149278eea8a7c99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:39:21.415337 1389119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 23:39:21.425377 1389119 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0803 23:39:21.425465 1389119 kubeadm.go:597] duration metric: took 21.955708ms to restartPrimaryControlPlane
	I0803 23:39:21.425481 1389119 kubeadm.go:394] duration metric: took 88.959354ms to StartCluster
	I0803 23:39:21.425497 1389119 settings.go:142] acquiring lock: {Name:mk6781ca2b0427afb2b67408884ede06d33d8dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:39:21.425592 1389119 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:39:21.426544 1389119 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/kubeconfig: {Name:mk7ac442c13ee76103bb330a149278eea8a7c99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:39:21.426781 1389119 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0803 23:39:21.427110 1389119 config.go:182] Loaded profile config "old-k8s-version-820414": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0803 23:39:21.427192 1389119 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:39:21.427291 1389119 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-820414"
	I0803 23:39:21.427313 1389119 addons.go:69] Setting dashboard=true in profile "old-k8s-version-820414"
	I0803 23:39:21.427357 1389119 addons.go:234] Setting addon dashboard=true in "old-k8s-version-820414"
	W0803 23:39:21.427369 1389119 addons.go:243] addon dashboard should already be in state true
	I0803 23:39:21.427386 1389119 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-820414"
	W0803 23:39:21.427408 1389119 addons.go:243] addon storage-provisioner should already be in state true
	I0803 23:39:21.427484 1389119 host.go:66] Checking if "old-k8s-version-820414" exists ...
	I0803 23:39:21.427395 1389119 host.go:66] Checking if "old-k8s-version-820414" exists ...
	I0803 23:39:21.427927 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:21.427982 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:21.427305 1389119 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-820414"
	I0803 23:39:21.428610 1389119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-820414"
	I0803 23:39:21.428892 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:21.427400 1389119 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-820414"
	I0803 23:39:21.431139 1389119 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-820414"
	W0803 23:39:21.431153 1389119 addons.go:243] addon metrics-server should already be in state true
	I0803 23:39:21.431201 1389119 host.go:66] Checking if "old-k8s-version-820414" exists ...
	I0803 23:39:21.431619 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:21.433415 1389119 out.go:177] * Verifying Kubernetes components...
	I0803 23:39:21.435550 1389119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:39:21.471158 1389119 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0803 23:39:21.473006 1389119 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0803 23:39:21.474906 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0803 23:39:21.474930 1389119 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0803 23:39:21.475011 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:21.485592 1389119 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:39:21.488319 1389119 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-820414"
	W0803 23:39:21.488339 1389119 addons.go:243] addon default-storageclass should already be in state true
	I0803 23:39:21.488368 1389119 host.go:66] Checking if "old-k8s-version-820414" exists ...
	I0803 23:39:21.488850 1389119 cli_runner.go:164] Run: docker container inspect old-k8s-version-820414 --format={{.State.Status}}
	I0803 23:39:21.496020 1389119 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:39:21.496045 1389119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:39:21.496114 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:21.508038 1389119 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0803 23:39:21.510747 1389119 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 23:39:21.510773 1389119 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 23:39:21.510854 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:21.542708 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:21.548948 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:21.571614 1389119 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:21.571636 1389119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:39:21.571697 1389119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-820414
	I0803 23:39:21.580175 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:21.605102 1389119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:39:21.626523 1389119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34543 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/old-k8s-version-820414/id_rsa Username:docker}
	I0803 23:39:21.635796 1389119 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-820414" to be "Ready" ...
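Waiting for "Ready" polls the node's Ready condition through the API server, which is why the same connection-refused errors seen for the addon applies also surface here until kube-apiserver is back. The equivalent one-shot query (sketch):

    kubectl --context old-k8s-version-820414 get node old-k8s-version-820414 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # prints "True" once Ready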
	I0803 23:39:21.693996 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:39:21.698163 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0803 23:39:21.698232 1389119 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0803 23:39:21.737923 1389119 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 23:39:21.738006 1389119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0803 23:39:21.744387 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0803 23:39:21.744452 1389119 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0803 23:39:21.768065 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:21.793998 1389119 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 23:39:21.794077 1389119 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 23:39:21.801989 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0803 23:39:21.802061 1389119 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0803 23:39:21.850040 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:21.850140 1389119 retry.go:31] will retry after 128.007349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
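Every apply/retry pair from here on follows one pattern: kube-apiserver on localhost:8443 is not answering yet, so each kubectl apply fails fast and is retried with backoff plus jitter (128ms, 245ms, 255ms, ... above). Stripped of minikube's retry.go plumbing, the shape is (a sketch; addon.yaml stands in for any of the manifests above):

    delay=0.128
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f addon.yaml; do
        sleep "$delay"
        delay=$(awk "BEGIN { print $delay * 2 }")    # crude doubling; minikube adds jitter
    done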
	I0803 23:39:21.859778 1389119 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:21.859843 1389119 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 23:39:21.863654 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0803 23:39:21.863725 1389119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0803 23:39:21.896921 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:21.904113 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0803 23:39:21.904185 1389119 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0803 23:39:21.921449 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:21.921543 1389119 retry.go:31] will retry after 245.560396ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:21.927461 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0803 23:39:21.927537 1389119 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0803 23:39:21.948230 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0803 23:39:21.948306 1389119 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0803 23:39:21.968344 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0803 23:39:21.968417 1389119 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0803 23:39:21.978910 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:39:21.994171 1389119 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0803 23:39:21.994245 1389119 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0803 23:39:22.017685 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:22.032600 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.032703 1389119 retry.go:31] will retry after 197.571782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:22.118873 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.118964 1389119 retry.go:31] will retry after 255.425122ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:22.123231 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.123261 1389119 retry.go:31] will retry after 176.596072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.168232 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:22.230671 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0803 23:39:22.249149 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.249231 1389119 retry.go:31] will retry after 508.639867ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.300345 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:22.305870 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.305949 1389119 retry.go:31] will retry after 465.648026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:22.373240 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.373274 1389119 retry.go:31] will retry after 415.489661ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.375408 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:22.446809 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.446852 1389119 retry.go:31] will retry after 636.411292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.758724 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:22.772057 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:22.789559 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:22.896183 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.896285 1389119 retry.go:31] will retry after 508.783559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:22.902878 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.902948 1389119 retry.go:31] will retry after 708.844803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:22.937218 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:22.937253 1389119 retry.go:31] will retry after 792.461535ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.084397 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:23.163511 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.163581 1389119 retry.go:31] will retry after 570.502839ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.405875 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0803 23:39:23.477480 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.477512 1389119 retry.go:31] will retry after 844.473972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.612787 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:23.636324 1389119 node_ready.go:53] error getting node "old-k8s-version-820414": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-820414": dial tcp 192.168.76.2:8443: connect: connection refused
	W0803 23:39:23.683219 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.683250 1389119 retry.go:31] will retry after 1.078889224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.730435 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0803 23:39:23.734689 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:23.820951 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.820985 1389119 retry.go:31] will retry after 775.450909ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:23.842324 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:23.842409 1389119 retry.go:31] will retry after 1.683951311s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.322844 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0803 23:39:24.399836 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.399870 1389119 retry.go:31] will retry after 687.701479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.597320 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:24.671381 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.671411 1389119 retry.go:31] will retry after 1.287278186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.762643 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0803 23:39:24.835658 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:24.835701 1389119 retry.go:31] will retry after 1.088263781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:25.088399 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0803 23:39:25.166040 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:25.166072 1389119 retry.go:31] will retry after 1.596171138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:25.526493 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:25.600067 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:25.600101 1389119 retry.go:31] will retry after 1.06164572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:25.636588 1389119 node_ready.go:53] error getting node "old-k8s-version-820414": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-820414": dial tcp 192.168.76.2:8443: connect: connection refused
	I0803 23:39:25.925080 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:25.959390 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:26.066157 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.066192 1389119 retry.go:31] will retry after 2.156384873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:26.071798 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.071834 1389119 retry.go:31] will retry after 1.880769569s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.662216 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:26.738148 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.738180 1389119 retry.go:31] will retry after 1.515576265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.763454 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0803 23:39:26.841402 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:26.841434 1389119 retry.go:31] will retry after 3.263692273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:27.636989 1389119 node_ready.go:53] error getting node "old-k8s-version-820414": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-820414": dial tcp 192.168.76.2:8443: connect: connection refused
	I0803 23:39:27.953533 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:28.030038 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:28.030069 1389119 retry.go:31] will retry after 2.201382318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:28.222950 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:28.254293 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0803 23:39:28.316548 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:28.316574 1389119 retry.go:31] will retry after 2.51808289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:28.354924 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:28.354955 1389119 retry.go:31] will retry after 4.650331283s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:30.106047 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:30.137302 1389119 node_ready.go:53] error getting node "old-k8s-version-820414": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-820414": dial tcp 192.168.76.2:8443: connect: connection refused
	I0803 23:39:30.231647 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0803 23:39:30.343695 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:30.343736 1389119 retry.go:31] will retry after 5.478776976s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0803 23:39:30.572672 1389119 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0803 23:39:30.572703 1389119 retry.go:31] will retry after 6.028650682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
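The retry.go lines above show minikube's apply-until-apiserver-is-up pattern: each addon manifest is re-applied with a growing, jittered delay while localhost:8443 still refuses connections. As a rough illustration only (this is not minikube's actual retry.go, and the command, attempt count, and backoff bounds are assumptions), the shape of that loop in Go is:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f manifest` until it
    // succeeds, sleeping a jittered, doubling delay between attempts, which
    // matches the irregular "will retry after ..." intervals in the log.
    func applyWithRetry(manifest string, attempts int) error {
    	backoff := time.Second
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		d := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter
    		fmt.Printf("apply failed, will retry after %v: %v\n%s", d, err, out)
    		time.Sleep(d)
    		backoff *= 2
    	}
    	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
    		fmt.Println(err)
    	}
    }

Once the apiserver comes back (around 23:39:40 below), the queued applies complete and the retries stop.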
	I0803 23:39:30.835398 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:39:33.005953 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:39:35.822756 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:39:36.601551 1389119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0803 23:39:40.555589 1389119 node_ready.go:49] node "old-k8s-version-820414" has status "Ready":"True"
	I0803 23:39:40.555614 1389119 node_ready.go:38] duration metric: took 18.919779992s for node "old-k8s-version-820414" to be "Ready" ...
	I0803 23:39:40.555624 1389119 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
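The pod_ready.go lines that follow are readiness polls: each system-critical pod is fetched repeatedly until its PodReady condition is True or the 6m0s budget runs out. A minimal sketch of that check with standard client-go (illustrative only; the kubeconfig path, namespace, pod name, and 2s interval are taken from or assumed around this log, and this is not minikube's pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 2s for up to 6m, mirroring the 6m0s budget above.
    	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-74ff55c5b-xng8r", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		return podIsReady(pod), nil
    	})
    	fmt.Println("ready:", err == nil)
    }

A pod that never becomes Ready (as metrics-server does below) exhausts the budget and surfaces as "context deadline exceeded".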
	I0803 23:39:40.741621 1389119 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-xng8r" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.025638 1389119 pod_ready.go:92] pod "coredns-74ff55c5b-xng8r" in "kube-system" namespace has status "Ready":"True"
	I0803 23:39:41.025663 1389119 pod_ready.go:81] duration metric: took 283.956982ms for pod "coredns-74ff55c5b-xng8r" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.025678 1389119 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.106757 1389119 pod_ready.go:92] pod "etcd-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"True"
	I0803 23:39:41.106832 1389119 pod_ready.go:81] duration metric: took 81.145094ms for pod "etcd-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.106868 1389119 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.174787 1389119 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"True"
	I0803 23:39:41.174809 1389119 pod_ready.go:81] duration metric: took 67.904355ms for pod "kube-apiserver-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:41.174821 1389119 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:39:42.755243 1389119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.919797637s)
	I0803 23:39:42.755287 1389119 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-820414"
	I0803 23:39:42.755338 1389119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.749358578s)
	I0803 23:39:42.755378 1389119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.932600652s)
	I0803 23:39:43.182452 1389119 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.580841286s)
	I0803 23:39:43.185067 1389119 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-820414 addons enable metrics-server
	
	I0803 23:39:43.187600 1389119 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0803 23:39:43.189814 1389119 addons.go:510] duration metric: took 21.762617209s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0803 23:39:43.202941 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:45.266307 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:47.680859 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:50.182555 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:52.684009 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:55.181569 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:57.680545 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:39:59.687863 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:02.182155 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:04.193109 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:06.689725 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:09.182065 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:11.683059 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:14.184274 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:16.185511 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:18.681816 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:20.685637 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:23.181594 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:25.181967 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:27.683321 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:30.183332 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:32.680888 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:35.181934 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:37.183150 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:39.681416 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:42.183279 1389119 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:42.681307 1389119 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"True"
	I0803 23:40:42.681337 1389119 pod_ready.go:81] duration metric: took 1m1.506507724s for pod "kube-controller-manager-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:40:42.681349 1389119 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rgk96" in "kube-system" namespace to be "Ready" ...
	I0803 23:40:42.686130 1389119 pod_ready.go:92] pod "kube-proxy-rgk96" in "kube-system" namespace has status "Ready":"True"
	I0803 23:40:42.686218 1389119 pod_ready.go:81] duration metric: took 4.860597ms for pod "kube-proxy-rgk96" in "kube-system" namespace to be "Ready" ...
	I0803 23:40:42.686239 1389119 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:40:44.692453 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:46.693314 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:49.192832 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:51.691969 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:53.693414 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:56.192363 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:40:58.193775 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:00.207335 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:02.691889 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:05.192573 1389119 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:06.692503 1389119 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:06.692533 1389119 pod_ready.go:81] duration metric: took 24.006284143s for pod "kube-scheduler-old-k8s-version-820414" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:06.692545 1389119 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:08.699230 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	[... pod_ready.go:102 repeats the same check for "metrics-server-9975d5f86-wm57q" at 2-3s intervals from 23:41:10 through 23:45:02, reporting status "Ready":"False" on every poll ...]
	I0803 23:45:05.198810 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:06.699353 1389119 pod_ready.go:81] duration metric: took 4m0.006793463s for pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace to be "Ready" ...
	E0803 23:45:06.699379 1389119 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0803 23:45:06.699393 1389119 pod_ready.go:38] duration metric: took 5m26.143754509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0803 23:45:06.699407 1389119 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:45:06.699437 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:45:06.699508 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:45:06.762175 1389119 cri.go:89] found id: "2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:06.762196 1389119 cri.go:89] found id: "fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:06.762200 1389119 cri.go:89] found id: ""
	I0803 23:45:06.762210 1389119 logs.go:276] 2 containers: [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d]
	I0803 23:45:06.762267 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.765974 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.769653 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0803 23:45:06.769726 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:45:06.807389 1389119 cri.go:89] found id: "5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:06.807413 1389119 cri.go:89] found id: "17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:06.807418 1389119 cri.go:89] found id: ""
	I0803 23:45:06.807426 1389119 logs.go:276] 2 containers: [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e]
	I0803 23:45:06.807482 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.811022 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.814520 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0803 23:45:06.814593 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:45:06.856056 1389119 cri.go:89] found id: "b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:06.856130 1389119 cri.go:89] found id: "1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:06.856150 1389119 cri.go:89] found id: ""
	I0803 23:45:06.856174 1389119 logs.go:276] 2 containers: [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4]
	I0803 23:45:06.856257 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.860079 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.863499 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:45:06.863594 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:45:06.905512 1389119 cri.go:89] found id: "9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:06.905534 1389119 cri.go:89] found id: "1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:06.905539 1389119 cri.go:89] found id: ""
	I0803 23:45:06.905545 1389119 logs.go:276] 2 containers: [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9]
	I0803 23:45:06.905622 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.909250 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.912616 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:45:06.912745 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:45:06.949355 1389119 cri.go:89] found id: "aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:06.949379 1389119 cri.go:89] found id: "9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:06.949385 1389119 cri.go:89] found id: ""
	I0803 23:45:06.949392 1389119 logs.go:276] 2 containers: [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc]
	I0803 23:45:06.949477 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.953258 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.957005 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:45:06.957132 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:45:06.994111 1389119 cri.go:89] found id: "decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:06.994136 1389119 cri.go:89] found id: "e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:06.994141 1389119 cri.go:89] found id: ""
	I0803 23:45:06.994148 1389119 logs.go:276] 2 containers: [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e]
	I0803 23:45:06.994205 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.998167 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.003669 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0803 23:45:07.003788 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:45:07.050511 1389119 cri.go:89] found id: "e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:07.050571 1389119 cri.go:89] found id: "0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:07.050591 1389119 cri.go:89] found id: ""
	I0803 23:45:07.050605 1389119 logs.go:276] 2 containers: [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34]
	I0803 23:45:07.050661 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.054359 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.058014 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0803 23:45:07.058119 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0803 23:45:07.096089 1389119 cri.go:89] found id: "32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:07.096114 1389119 cri.go:89] found id: ""
	I0803 23:45:07.096122 1389119 logs.go:276] 1 containers: [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838]
	I0803 23:45:07.096176 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.099595 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0803 23:45:07.099711 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0803 23:45:07.162961 1389119 cri.go:89] found id: "fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:07.163028 1389119 cri.go:89] found id: "9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:07.163047 1389119 cri.go:89] found id: ""
	I0803 23:45:07.163070 1389119 logs.go:276] 2 containers: [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e]
	I0803 23:45:07.163166 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.166813 1389119 ssh_runner.go:195] Run: which crictl
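The cri.go lookups above all follow one pattern: `sudo crictl ps -a --quiet --name=<name>` prints one container ID per line, and the IDs are split into the "found id:" list. A small sketch of the same lookup (simplified error handling; not minikube's cri.go, and the binary must be on PATH with sudo rights):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers (running or exited)
    // whose name matches, exactly as the crictl invocations in the log do.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := listContainers("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }

Two IDs per component (current plus exited) is expected here, since the node was restarted during the test.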
	I0803 23:45:07.170591 1389119 logs.go:123] Gathering logs for kubelet ...
	I0803 23:45:07.170624 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 23:45:07.229954 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459162     663 reflector.go:138] object-"kube-system"/"coredns-token-9d2xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-9d2xv" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230178 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459309     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230387 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459518     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230605 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459607     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-mfhhp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mfhhp" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459825     663 reflector.go:138] object-"kube-system"/"kindnet-token-ghz8l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghz8l" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231056 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470624     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xnstr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xnstr" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231267 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470708     663 reflector.go:138] object-"default"/"default-token-2m78r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2m78r" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231489 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470783     663 reflector.go:138] object-"kube-system"/"metrics-server-token-nxdwh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nxdwh" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.239211 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.798080     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.239402 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.847187     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.242987 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:55 old-k8s-version-820414 kubelet[663]: E0803 23:39:55.456344     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.244679 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:09 old-k8s-version-820414 kubelet[663]: E0803 23:40:09.485277     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.245677 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:12 old-k8s-version-820414 kubelet[663]: E0803 23:40:12.189775     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.246011 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:13 old-k8s-version-820414 kubelet[663]: E0803 23:40:13.184996     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.246477 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:15 old-k8s-version-820414 kubelet[663]: E0803 23:40:15.194567     663 pod_workers.go:191] Error syncing pod 760afa3c-130b-47d5-a942-ae27ff7ac5f5 ("storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"
	W0803 23:45:07.246820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:16 old-k8s-version-820414 kubelet[663]: E0803 23:40:16.619029     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.249665 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:23 old-k8s-version-820414 kubelet[663]: E0803 23:40:23.484273     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.250393 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:30 old-k8s-version-820414 kubelet[663]: E0803 23:40:30.262767     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.250579 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.524198     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.250908 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.615532     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.251237 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:48 old-k8s-version-820414 kubelet[663]: E0803 23:40:48.444598     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.251425 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:49 old-k8s-version-820414 kubelet[663]: E0803 23:40:49.445255     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.251622 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:03 old-k8s-version-820414 kubelet[663]: E0803 23:41:03.444496     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.252220 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:04 old-k8s-version-820414 kubelet[663]: E0803 23:41:04.368191     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.252575 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:06 old-k8s-version-820414 kubelet[663]: E0803 23:41:06.609272     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.255047 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:15 old-k8s-version-820414 kubelet[663]: E0803 23:41:15.481112     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.255380 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:20 old-k8s-version-820414 kubelet[663]: E0803 23:41:20.444068     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.255574 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:26 old-k8s-version-820414 kubelet[663]: E0803 23:41:26.444477     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.255904 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:35 old-k8s-version-820414 kubelet[663]: E0803 23:41:35.444186     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.256112 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:38 old-k8s-version-820414 kubelet[663]: E0803 23:41:38.444525     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.256714 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:47 old-k8s-version-820414 kubelet[663]: E0803 23:41:47.552369     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.256913 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:52 old-k8s-version-820414 kubelet[663]: E0803 23:41:52.444351     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.257247 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:56 old-k8s-version-820414 kubelet[663]: E0803 23:41:56.604520     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.257437 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:07 old-k8s-version-820414 kubelet[663]: E0803 23:42:07.444300     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.257772 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:11 old-k8s-version-820414 kubelet[663]: E0803 23:42:11.444597     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.258088 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444598     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.258287 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444840     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.258626 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:34 old-k8s-version-820414 kubelet[663]: E0803 23:42:34.444102     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.261093 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:37 old-k8s-version-820414 kubelet[663]: E0803 23:42:37.453580     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.261430 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:49 old-k8s-version-820414 kubelet[663]: E0803 23:42:49.444596     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.261616 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:51 old-k8s-version-820414 kubelet[663]: E0803 23:42:51.445065     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.261945 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:02 old-k8s-version-820414 kubelet[663]: E0803 23:43:02.444071     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.262132 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:05 old-k8s-version-820414 kubelet[663]: E0803 23:43:05.444843     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.262727 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:16 old-k8s-version-820414 kubelet[663]: E0803 23:43:16.774818     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.262915 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:18 old-k8s-version-820414 kubelet[663]: E0803 23:43:18.444453     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.263246 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:26 old-k8s-version-820414 kubelet[663]: E0803 23:43:26.604545     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.263433 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:29 old-k8s-version-820414 kubelet[663]: E0803 23:43:29.445577     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.263764 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:41 old-k8s-version-820414 kubelet[663]: E0803 23:43:41.444779     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.263951 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:44 old-k8s-version-820414 kubelet[663]: E0803 23:43:44.444419     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.264281 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:56 old-k8s-version-820414 kubelet[663]: E0803 23:43:56.444259     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.264467 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:58 old-k8s-version-820414 kubelet[663]: E0803 23:43:58.444353     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.264806 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: E0803 23:44:08.444136     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.264996 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:12 old-k8s-version-820414 kubelet[663]: E0803 23:44:12.444406     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.265328 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: E0803 23:44:22.444030     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.265513 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:23 old-k8s-version-820414 kubelet[663]: E0803 23:44:23.449169     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.265698 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:35 old-k8s-version-820414 kubelet[663]: E0803 23:44:35.445923     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.266028 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.266363 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.266550 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.266737 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.267072 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
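
[Editor's note] The long run of "Found kubelet problem" warnings above comes from scanning the kubelet journal for lines that match known failure signatures (image-pull failures, crash loops, reflector list/watch errors). A hypothetical sketch of that scan follows; the regex here is an illustrative guess, not minikube's actual pattern set.

```go
// Scan the kubelet journal and flag lines matching known failure
// signatures, roughly as the logs.go:138 warnings above suggest.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

// Illustrative problem signatures; the real set may differ.
var problemRe = regexp.MustCompile(
	`ErrImagePull|ImagePullBackOff|CrashLoopBackOff|Failed to watch \*v1\.`)

func main() {
	cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		if line := sc.Text(); problemRe.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	cmd.Wait()
}
```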
	I0803 23:45:07.267087 1389119 logs.go:123] Gathering logs for dmesg ...
	I0803 23:45:07.267104 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 23:45:07.288852 1389119 logs.go:123] Gathering logs for kube-apiserver [fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d] ...
	I0803 23:45:07.288927 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:07.347909 1389119 logs.go:123] Gathering logs for etcd [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6] ...
	I0803 23:45:07.347941 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:07.391806 1389119 logs.go:123] Gathering logs for kube-proxy [9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc] ...
	I0803 23:45:07.391836 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:07.437412 1389119 logs.go:123] Gathering logs for kube-controller-manager [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a] ...
	I0803 23:45:07.437441 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:07.512477 1389119 logs.go:123] Gathering logs for kindnet [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659] ...
	I0803 23:45:07.512513 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:07.574142 1389119 logs.go:123] Gathering logs for kindnet [0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34] ...
	I0803 23:45:07.574177 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:07.636991 1389119 logs.go:123] Gathering logs for kubernetes-dashboard [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838] ...
	I0803 23:45:07.637025 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:07.679758 1389119 logs.go:123] Gathering logs for storage-provisioner [9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e] ...
	I0803 23:45:07.679829 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:07.729314 1389119 logs.go:123] Gathering logs for containerd ...
	I0803 23:45:07.729344 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0803 23:45:07.793803 1389119 logs.go:123] Gathering logs for describe nodes ...
	I0803 23:45:07.793843 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 23:45:07.975479 1389119 logs.go:123] Gathering logs for coredns [1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4] ...
	I0803 23:45:07.975517 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:08.018369 1389119 logs.go:123] Gathering logs for kube-scheduler [1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9] ...
	I0803 23:45:08.018398 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:08.071822 1389119 logs.go:123] Gathering logs for kube-proxy [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5] ...
	I0803 23:45:08.071855 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:08.120580 1389119 logs.go:123] Gathering logs for etcd [17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e] ...
	I0803 23:45:08.120607 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:08.165131 1389119 logs.go:123] Gathering logs for coredns [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02] ...
	I0803 23:45:08.165163 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:08.204082 1389119 logs.go:123] Gathering logs for kube-controller-manager [e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e] ...
	I0803 23:45:08.204111 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:08.278449 1389119 logs.go:123] Gathering logs for kube-apiserver [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d] ...
	I0803 23:45:08.278489 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:08.346514 1389119 logs.go:123] Gathering logs for kube-scheduler [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a] ...
	I0803 23:45:08.346551 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:08.385202 1389119 logs.go:123] Gathering logs for storage-provisioner [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898] ...
	I0803 23:45:08.385237 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:08.436943 1389119 logs.go:123] Gathering logs for container status ...
	I0803 23:45:08.436973 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
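
[Editor's note] The final "container status" command above uses a shell fallback: resolve crictl if it is on PATH, and if the crictl invocation fails, fall back to the docker CLI for the same listing. A hypothetical Go sketch of that try-A-then-B idiom:

```go
// containerStatus sketches the fallback in the "container status" step:
// prefer crictl, and only fall back to docker if crictl fails.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failed: get the same information from docker instead.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("both crictl and docker failed: %w", err)
	}
	return string(out), nil
}

func main() {
	status, err := containerStatus()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(status)
}
```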
	I0803 23:45:08.513063 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:08.513090 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 23:45:08.513149 1389119 out.go:239] X Problems detected in kubelet:
	W0803 23:45:08.513166 1389119 out.go:239]   Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:08.513182 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:08.513328 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:08.513337 1389119 out.go:239]   Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:08.513343 1389119 out.go:239]   Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	I0803 23:45:08.513354 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:08.513365 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:45:18.514287 1389119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:45:18.526473 1389119 api_server.go:72] duration metric: took 5m57.099654038s to wait for apiserver process to appear ...
	I0803 23:45:18.526500 1389119 api_server.go:88] waiting for apiserver healthz status ...
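
[Editor's note] After confirming the apiserver process exists (the pgrep above), the test waits for the apiserver's /healthz endpoint to report healthy. A minimal sketch of such a deadline-bounded poll is below; the URL, port, and timeout are assumptions for illustration (minikube derives them from the cluster config), not the test's actual values.

```go
// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 OK or the deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: the apiserver serves a self-signed cert, so this sketch
		// skips verification; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver healthz not ok within %s", timeout)
}

func main() {
	// Hypothetical endpoint; the node address in this run is on 192.168.76.x.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz ok")
}
```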
	I0803 23:45:18.526535 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:45:18.526600 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:45:18.567715 1389119 cri.go:89] found id: "2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:18.567739 1389119 cri.go:89] found id: "fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:18.567744 1389119 cri.go:89] found id: ""
	I0803 23:45:18.567751 1389119 logs.go:276] 2 containers: [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d]
	I0803 23:45:18.567807 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.571380 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.574952 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0803 23:45:18.575024 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:45:18.615398 1389119 cri.go:89] found id: "5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:18.615423 1389119 cri.go:89] found id: "17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:18.615428 1389119 cri.go:89] found id: ""
	I0803 23:45:18.615436 1389119 logs.go:276] 2 containers: [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e]
	I0803 23:45:18.615491 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.619044 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.623004 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0803 23:45:18.623101 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:45:18.666718 1389119 cri.go:89] found id: "b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:18.666739 1389119 cri.go:89] found id: "1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:18.666744 1389119 cri.go:89] found id: ""
	I0803 23:45:18.666751 1389119 logs.go:276] 2 containers: [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4]
	I0803 23:45:18.666810 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.670661 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.674310 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:45:18.674385 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:45:18.715500 1389119 cri.go:89] found id: "9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:18.715541 1389119 cri.go:89] found id: "1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:18.715546 1389119 cri.go:89] found id: ""
	I0803 23:45:18.715553 1389119 logs.go:276] 2 containers: [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9]
	I0803 23:45:18.715616 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.719414 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.723323 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:45:18.723424 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:45:18.767590 1389119 cri.go:89] found id: "aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:18.767614 1389119 cri.go:89] found id: "9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:18.767620 1389119 cri.go:89] found id: ""
	I0803 23:45:18.767627 1389119 logs.go:276] 2 containers: [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc]
	I0803 23:45:18.767685 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.771782 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.775255 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:45:18.775365 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:45:18.812952 1389119 cri.go:89] found id: "decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:18.812978 1389119 cri.go:89] found id: "e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:18.812984 1389119 cri.go:89] found id: ""
	I0803 23:45:18.812991 1389119 logs.go:276] 2 containers: [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e]
	I0803 23:45:18.813050 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.817560 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.821261 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0803 23:45:18.821336 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:45:18.874734 1389119 cri.go:89] found id: "e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:18.874797 1389119 cri.go:89] found id: "0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:18.874808 1389119 cri.go:89] found id: ""
	I0803 23:45:18.874815 1389119 logs.go:276] 2 containers: [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34]
	I0803 23:45:18.874878 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.878704 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.882687 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0803 23:45:18.882760 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0803 23:45:18.930472 1389119 cri.go:89] found id: "32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:18.930547 1389119 cri.go:89] found id: ""
	I0803 23:45:18.930562 1389119 logs.go:276] 1 containers: [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838]
	I0803 23:45:18.930627 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.934504 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0803 23:45:18.934584 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0803 23:45:18.972093 1389119 cri.go:89] found id: "fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:18.972114 1389119 cri.go:89] found id: "9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:18.972118 1389119 cri.go:89] found id: ""
	I0803 23:45:18.972126 1389119 logs.go:276] 2 containers: [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e]
	I0803 23:45:18.972181 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.975653 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.979224 1389119 logs.go:123] Gathering logs for etcd [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6] ...
	I0803 23:45:18.979249 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:19.024836 1389119 logs.go:123] Gathering logs for kube-controller-manager [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a] ...
	I0803 23:45:19.024865 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:19.085464 1389119 logs.go:123] Gathering logs for kube-controller-manager [e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e] ...
	I0803 23:45:19.085496 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:19.152566 1389119 logs.go:123] Gathering logs for storage-provisioner [9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e] ...
	I0803 23:45:19.152598 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:19.196949 1389119 logs.go:123] Gathering logs for containerd ...
	I0803 23:45:19.196976 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0803 23:45:19.256338 1389119 logs.go:123] Gathering logs for kubelet ...
	I0803 23:45:19.256370 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 23:45:19.312575 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459162     663 reflector.go:138] object-"kube-system"/"coredns-token-9d2xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-9d2xv" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.312808 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459309     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313021 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459518     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313244 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459607     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-mfhhp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mfhhp" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313461 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459825     663 reflector.go:138] object-"kube-system"/"kindnet-token-ghz8l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghz8l" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313744 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470624     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xnstr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xnstr" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313958 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470708     663 reflector.go:138] object-"default"/"default-token-2m78r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2m78r" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.314180 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470783     663 reflector.go:138] object-"kube-system"/"metrics-server-token-nxdwh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nxdwh" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.321969 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.798080     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.322165 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.847187     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.325787 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:55 old-k8s-version-820414 kubelet[663]: E0803 23:39:55.456344     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.327539 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:09 old-k8s-version-820414 kubelet[663]: E0803 23:40:09.485277     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.328844 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:12 old-k8s-version-820414 kubelet[663]: E0803 23:40:12.189775     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.329196 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:13 old-k8s-version-820414 kubelet[663]: E0803 23:40:13.184996     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.329642 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:15 old-k8s-version-820414 kubelet[663]: E0803 23:40:15.194567     663 pod_workers.go:191] Error syncing pod 760afa3c-130b-47d5-a942-ae27ff7ac5f5 ("storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"
	W0803 23:45:19.329978 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:16 old-k8s-version-820414 kubelet[663]: E0803 23:40:16.619029     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.332858 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:23 old-k8s-version-820414 kubelet[663]: E0803 23:40:23.484273     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.333589 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:30 old-k8s-version-820414 kubelet[663]: E0803 23:40:30.262767     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.333779 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.524198     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.334110 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.615532     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.334443 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:48 old-k8s-version-820414 kubelet[663]: E0803 23:40:48.444598     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.334633 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:49 old-k8s-version-820414 kubelet[663]: E0803 23:40:49.445255     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.334820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:03 old-k8s-version-820414 kubelet[663]: E0803 23:41:03.444496     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.335421 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:04 old-k8s-version-820414 kubelet[663]: E0803 23:41:04.368191     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.335758 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:06 old-k8s-version-820414 kubelet[663]: E0803 23:41:06.609272     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.338272 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:15 old-k8s-version-820414 kubelet[663]: E0803 23:41:15.481112     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.338607 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:20 old-k8s-version-820414 kubelet[663]: E0803 23:41:20.444068     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.338795 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:26 old-k8s-version-820414 kubelet[663]: E0803 23:41:26.444477     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.339129 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:35 old-k8s-version-820414 kubelet[663]: E0803 23:41:35.444186     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.339317 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:38 old-k8s-version-820414 kubelet[663]: E0803 23:41:38.444525     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.339919 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:47 old-k8s-version-820414 kubelet[663]: E0803 23:41:47.552369     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.340105 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:52 old-k8s-version-820414 kubelet[663]: E0803 23:41:52.444351     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.340438 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:56 old-k8s-version-820414 kubelet[663]: E0803 23:41:56.604520     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.340631 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:07 old-k8s-version-820414 kubelet[663]: E0803 23:42:07.444300     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.340975 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:11 old-k8s-version-820414 kubelet[663]: E0803 23:42:11.444597     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.341293 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444598     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.341492 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444840     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.341821 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:34 old-k8s-version-820414 kubelet[663]: E0803 23:42:34.444102     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.344283 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:37 old-k8s-version-820414 kubelet[663]: E0803 23:42:37.453580     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.344612 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:49 old-k8s-version-820414 kubelet[663]: E0803 23:42:49.444596     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.344805 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:51 old-k8s-version-820414 kubelet[663]: E0803 23:42:51.445065     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.345144 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:02 old-k8s-version-820414 kubelet[663]: E0803 23:43:02.444071     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.345329 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:05 old-k8s-version-820414 kubelet[663]: E0803 23:43:05.444843     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.345923 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:16 old-k8s-version-820414 kubelet[663]: E0803 23:43:16.774818     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.346108 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:18 old-k8s-version-820414 kubelet[663]: E0803 23:43:18.444453     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.346439 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:26 old-k8s-version-820414 kubelet[663]: E0803 23:43:26.604545     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.346628 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:29 old-k8s-version-820414 kubelet[663]: E0803 23:43:29.445577     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.346959 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:41 old-k8s-version-820414 kubelet[663]: E0803 23:43:41.444779     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.347145 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:44 old-k8s-version-820414 kubelet[663]: E0803 23:43:44.444419     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.347476 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:56 old-k8s-version-820414 kubelet[663]: E0803 23:43:56.444259     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.347665 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:58 old-k8s-version-820414 kubelet[663]: E0803 23:43:58.444353     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.347998 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: E0803 23:44:08.444136     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.348185 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:12 old-k8s-version-820414 kubelet[663]: E0803 23:44:12.444406     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.348518 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: E0803 23:44:22.444030     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.348705 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:23 old-k8s-version-820414 kubelet[663]: E0803 23:44:23.449169     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.348906 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:35 old-k8s-version-820414 kubelet[663]: E0803 23:44:35.445923     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.349238 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.349570 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.349758 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.349944 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.350279 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.350466 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:15 old-k8s-version-820414 kubelet[663]: E0803 23:45:15.444459     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.350799 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: E0803 23:45:17.444534     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	I0803 23:45:19.350809 1389119 logs.go:123] Gathering logs for kube-apiserver [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d] ...
	I0803 23:45:19.350823 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:19.404505 1389119 logs.go:123] Gathering logs for coredns [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02] ...
	I0803 23:45:19.404535 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:19.444242 1389119 logs.go:123] Gathering logs for coredns [1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4] ...
	I0803 23:45:19.444324 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:19.497178 1389119 logs.go:123] Gathering logs for kube-proxy [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5] ...
	I0803 23:45:19.497207 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:19.535467 1389119 logs.go:123] Gathering logs for kindnet [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659] ...
	I0803 23:45:19.535495 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:19.590796 1389119 logs.go:123] Gathering logs for container status ...
	I0803 23:45:19.590831 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 23:45:19.664429 1389119 logs.go:123] Gathering logs for dmesg ...
	I0803 23:45:19.664457 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 23:45:19.683161 1389119 logs.go:123] Gathering logs for kube-apiserver [fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d] ...
	I0803 23:45:19.683187 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:19.762085 1389119 logs.go:123] Gathering logs for kube-proxy [9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc] ...
	I0803 23:45:19.762122 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:19.801223 1389119 logs.go:123] Gathering logs for kubernetes-dashboard [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838] ...
	I0803 23:45:19.801250 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:19.842438 1389119 logs.go:123] Gathering logs for storage-provisioner [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898] ...
	I0803 23:45:19.842463 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:19.881283 1389119 logs.go:123] Gathering logs for describe nodes ...
	I0803 23:45:19.881309 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 23:45:20.030350 1389119 logs.go:123] Gathering logs for kube-scheduler [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a] ...
	I0803 23:45:20.030384 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:20.078615 1389119 logs.go:123] Gathering logs for kube-scheduler [1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9] ...
	I0803 23:45:20.078644 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:20.133111 1389119 logs.go:123] Gathering logs for kindnet [0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34] ...
	I0803 23:45:20.133146 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:20.187580 1389119 logs.go:123] Gathering logs for etcd [17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e] ...
	I0803 23:45:20.187613 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:20.231981 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:20.232005 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 23:45:20.232080 1389119 out.go:239] X Problems detected in kubelet:
	W0803 23:45:20.232093 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232101 1389119 out.go:239]   Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232128 1389119 out.go:239]   Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:20.232153 1389119 out.go:239]   Aug 03 23:45:15 old-k8s-version-820414 kubelet[663]: E0803 23:45:15.444459     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232159 1389119 out.go:239]   Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: E0803 23:45:17.444534     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	I0803 23:45:20.232166 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:20.232171 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:45:30.232956 1389119 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0803 23:45:30.246240 1389119 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0803 23:45:30.248504 1389119 out.go:177] 
	W0803 23:45:30.250531 1389119 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0803 23:45:30.250566 1389119 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0803 23:45:30.250585 1389119 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0803 23:45:30.250591 1389119 out.go:239] * 
	W0803 23:45:30.251796 1389119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 23:45:30.253672 1389119 out.go:177] 

** /stderr **
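Reading the stderr block above, two failure signatures repeat: metrics-server cycles between ErrImagePull and ImagePullBackOff because fake.domain/registry.k8s.io/echoserver:1.4 never resolves ("lookup fake.domain ... no such host"), and dashboard-metrics-scraper sits in CrashLoopBackOff with the usual exponential backoff (10s, 20s, 40s, 1m20s, 2m40s). The run ultimately exits because the control plane never reports v1.20.0, even though /healthz returns 200. The same evidence can be re-gathered by hand; a minimal sketch, assuming the profile from this run is still up and substituting a container ID taken from crictl ps -a (the invocations mirror the ones logged above):

	minikube -p old-k8s-version-820414 ssh -- sudo /usr/bin/crictl ps -a
	minikube -p old-k8s-version-820414 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	minikube -p old-k8s-version-820414 ssh -- sudo journalctl -u kubelet -n 400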
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
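The exit status 102 reported here is the K8S_UNHEALTHY_CONTROL_PLANE exit visible in the stderr above, and minikube's own suggestion for it is a full reset. A sketch of that recovery path with the same binary and flags as the failed run (note that delete --all --purge is destructive across every local profile):

	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 \
		--driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0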
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-820414
helpers_test.go:235: (dbg) docker inspect old-k8s-version-820414:

-- stdout --
	[
	    {
	        "Id": "6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78",
	        "Created": "2024-08-03T23:37:02.438080251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1389326,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-03T23:39:14.618864833Z",
	            "FinishedAt": "2024-08-03T23:39:13.673216016Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78/hosts",
	        "LogPath": "/var/lib/docker/containers/6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78/6c6e751418278304679cfaa512a57f3ed20d57dc837f1d88d4e3d9c1d83f2e78-json.log",
	        "Name": "/old-k8s-version-820414",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-820414:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-820414",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/64d5c58ba6345085c5b0981fa9b38f39cfe947b6a5ebb3e108990e7097fc526f-init/diff:/var/lib/docker/overlay2/d0e9013ff93972be10de1ce499c76c412f16d87933b328b08c9d90d7f75831bd/diff",
	                "MergedDir": "/var/lib/docker/overlay2/64d5c58ba6345085c5b0981fa9b38f39cfe947b6a5ebb3e108990e7097fc526f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/64d5c58ba6345085c5b0981fa9b38f39cfe947b6a5ebb3e108990e7097fc526f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/64d5c58ba6345085c5b0981fa9b38f39cfe947b6a5ebb3e108990e7097fc526f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-820414",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-820414/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-820414",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-820414",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-820414",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e88335dc791ad5b1c914d2745f7968c79755cff193195883ecb050df6f361ac",
	            "SandboxKey": "/var/run/docker/netns/5e88335dc791",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34543"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34544"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34547"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34545"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34546"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-820414": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "83efb4e5bf795e056e8f63fc11974546adc9aa8b28497a6b30bd5cb3c0bf7af0",
	                    "EndpointID": "6fb89e03b515d7ab352905a7820822614397829671ee3f994a40283578e619b3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-820414",
	                        "6c6e75141827"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
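
The inspect dump above is also where the harness gets its connection details: the published host ports under "NetworkSettings.Ports" are what the later ssh/scp steps dial. As a minimal Go sketch (the struct below is illustrative, not minikube's own type), decoding that shape to recover the 22/tcp host port looks like this:

	// Sketch only: decode the NetworkSettings.Ports shape from the
	// inspect dump above and pull out the published port for 22/tcp.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// binding mirrors one entry of a port's host-binding list.
	type binding struct {
		HostIp   string
		HostPort string
	}

	func main() {
		// Trimmed from the inspect output above.
		raw := []byte(`{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"34543"}]}`)

		ports := map[string][]binding{}
		if err := json.Unmarshal(raw, &ports); err != nil {
			panic(err)
		}
		fmt.Println(ports["22/tcp"][0].HostPort) // prints 34543
	}

The same lookup shows up later in these logs as a Go template passed straight to the CLI: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".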
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-820414 -n old-k8s-version-820414
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-820414 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-820414 logs -n 25: (1.962762514s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-374898 sudo find                             | cilium-374898             | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-374898 sudo crio                             | cilium-374898             | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-374898                                       | cilium-374898             | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC | 03 Aug 24 23:35 UTC |
	| start   | -p force-systemd-env-180357                            | force-systemd-env-180357  | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC | 03 Aug 24 23:36 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-272336                              | force-systemd-flag-272336 | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC | 03 Aug 24 23:35 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-272336                           | force-systemd-flag-272336 | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC | 03 Aug 24 23:35 UTC |
	| start   | -p cert-expiration-764783                              | cert-expiration-764783    | jenkins | v1.33.1 | 03 Aug 24 23:35 UTC | 03 Aug 24 23:36 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-180357                               | force-systemd-env-180357  | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-180357                            | force-systemd-env-180357  | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	| start   | -p cert-options-142692                                 | cert-options-142692       | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-142692 ssh                                | cert-options-142692       | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-142692 -- sudo                         | cert-options-142692       | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-142692                                 | cert-options-142692       | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:36 UTC |
	| start   | -p old-k8s-version-820414                              | old-k8s-version-820414    | jenkins | v1.33.1 | 03 Aug 24 23:36 UTC | 03 Aug 24 23:38 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-820414        | old-k8s-version-820414    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-820414                              | old-k8s-version-820414    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-820414             | old-k8s-version-820414    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-820414                              | old-k8s-version-820414    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-764783                              | cert-expiration-764783    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-764783                              | cert-expiration-764783    | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:39 UTC |
	| start   | -p no-preload-344284                                   | no-preload-344284         | jenkins | v1.33.1 | 03 Aug 24 23:39 UTC | 03 Aug 24 23:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-344284             | no-preload-344284         | jenkins | v1.33.1 | 03 Aug 24 23:41 UTC | 03 Aug 24 23:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-344284                                   | no-preload-344284         | jenkins | v1.33.1 | 03 Aug 24 23:41 UTC | 03 Aug 24 23:41 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-344284                  | no-preload-344284         | jenkins | v1.33.1 | 03 Aug 24 23:41 UTC | 03 Aug 24 23:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-344284                                   | no-preload-344284         | jenkins | v1.33.1 | 03 Aug 24 23:41 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                      |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 23:41:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 23:41:29.218169 1397775 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:41:29.218317 1397775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:41:29.218329 1397775 out.go:304] Setting ErrFile to fd 2...
	I0803 23:41:29.218335 1397775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:41:29.218587 1397775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:41:29.218961 1397775 out.go:298] Setting JSON to false
	I0803 23:41:29.220079 1397775 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":30235,"bootTime":1722698255,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 23:41:29.220159 1397775 start.go:139] virtualization:  
	I0803 23:41:29.224282 1397775 out.go:177] * [no-preload-344284] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 23:41:29.226307 1397775 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:41:29.226648 1397775 notify.go:220] Checking for updates...
	I0803 23:41:29.230599 1397775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:41:29.232433 1397775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:41:29.234465 1397775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 23:41:29.236144 1397775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 23:41:29.238172 1397775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:41:29.240437 1397775 config.go:182] Loaded profile config "no-preload-344284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0-rc.0
	I0803 23:41:29.241163 1397775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:41:29.274571 1397775 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 23:41:29.274679 1397775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:41:29.350870 1397775 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-03 23:41:29.340541295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:41:29.350986 1397775 docker.go:307] overlay module found
	I0803 23:41:29.353223 1397775 out.go:177] * Using the docker driver based on existing profile
	I0803 23:41:29.354914 1397775 start.go:297] selected driver: docker
	I0803 23:41:29.354934 1397775 start.go:901] validating driver "docker" against &{Name:no-preload-344284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-344284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:41:29.355063 1397775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:41:29.355715 1397775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:41:29.421048 1397775 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-03 23:41:29.405776002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:41:29.421406 1397775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0803 23:41:29.421428 1397775 cni.go:84] Creating CNI manager for ""
	I0803 23:41:29.421436 1397775 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 23:41:29.421483 1397775 start.go:340] cluster config:
	{Name:no-preload-344284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-344284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:41:29.424922 1397775 out.go:177] * Starting "no-preload-344284" primary control-plane node in "no-preload-344284" cluster
	I0803 23:41:29.429374 1397775 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 23:41:29.431941 1397775 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0803 23:41:29.433810 1397775 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0803 23:41:29.433891 1397775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 23:41:29.433971 1397775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/config.json ...
	I0803 23:41:29.434280 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
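	The "Not caching binary" line above points at a checksum-qualified URL (checksum=file:...kubeadm.sha256). A hedged sketch of what such a guarded download amounts to, using only the URLs from the log; the verification logic below is illustrative, not minikube's implementation:

	// Sketch: download kubeadm and its published SHA-256, refuse on mismatch.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	// fetch returns the full body of a GET request.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256") // file holds the hex digest
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
			panic("kubeadm checksum mismatch")
		}
		fmt.Println("checksum verified:", len(bin), "bytes")
	}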
	W0803 23:41:29.455672 1397775 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0803 23:41:29.455689 1397775 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 23:41:29.455766 1397775 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 23:41:29.455782 1397775 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0803 23:41:29.455787 1397775 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0803 23:41:29.455795 1397775 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0803 23:41:29.455800 1397775 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0803 23:41:29.597295 1397775 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0803 23:41:29.597332 1397775 cache.go:194] Successfully downloaded all kic artifacts
	I0803 23:41:29.597362 1397775 start.go:360] acquireMachinesLock for no-preload-344284: {Name:mk78512c904592d48687d7f0baa079ce0fa7c32c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:29.597442 1397775 start.go:364] duration metric: took 52.267µs to acquireMachinesLock for "no-preload-344284"
	I0803 23:41:29.597466 1397775 start.go:96] Skipping create...Using existing machine configuration
	I0803 23:41:29.597474 1397775 fix.go:54] fixHost starting: 
	I0803 23:41:29.597768 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:29.623556 1397775 fix.go:112] recreateIfNeeded on no-preload-344284: state=Stopped err=<nil>
	W0803 23:41:29.623583 1397775 fix.go:138] unexpected machine state, will restart: <nil>
	I0803 23:41:29.624809 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:29.625643 1397775 out.go:177] * Restarting existing docker container for "no-preload-344284" ...
	I0803 23:41:29.199602 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:31.698396 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:33.700341 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:29.627400 1397775 cli_runner.go:164] Run: docker start no-preload-344284
	I0803 23:41:29.852876 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:29.971791 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:29.994429 1397775 kic.go:430] container "no-preload-344284" state is running.
	I0803 23:41:29.994841 1397775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-344284
	I0803 23:41:30.041163 1397775 cache.go:107] acquiring lock: {Name:mkdda788fe5b4c1329a18cab528bc492e47e5a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041272 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0803 23:41:30.041282 1397775 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.424µs
	I0803 23:41:30.041291 1397775 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0803 23:41:30.041302 1397775 cache.go:107] acquiring lock: {Name:mkdcf804b7148660376bcf5829e4d8cd9cb607ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041347 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0803 23:41:30.041353 1397775 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 52.981µs
	I0803 23:41:30.041359 1397775 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0803 23:41:30.041376 1397775 cache.go:107] acquiring lock: {Name:mk98ec6f549540966c0e37778fd3b1e07d473a1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041418 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0803 23:41:30.041423 1397775 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 54.351µs
	I0803 23:41:30.041430 1397775 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0803 23:41:30.041444 1397775 cache.go:107] acquiring lock: {Name:mk679a04e82f202846e26c1a65983b4cb9718afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041471 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0803 23:41:30.041481 1397775 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 38.655µs
	I0803 23:41:30.041488 1397775 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0803 23:41:30.041497 1397775 cache.go:107] acquiring lock: {Name:mk6f4cd55b127e07c3fc5dfac24312217486517a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041525 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0803 23:41:30.041530 1397775 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 34.01µs
	I0803 23:41:30.041536 1397775 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0803 23:41:30.041547 1397775 cache.go:107] acquiring lock: {Name:mk6217b1aea93e04aece3289712b91de4074ba34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041574 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0803 23:41:30.041580 1397775 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 34.035µs
	I0803 23:41:30.041585 1397775 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0803 23:41:30.041595 1397775 cache.go:107] acquiring lock: {Name:mk0099cd3ac17052d2ad64f7e0ee1cb16ad350f8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041620 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0803 23:41:30.041626 1397775 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 32.566µs
	I0803 23:41:30.041632 1397775 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0803 23:41:30.041646 1397775 cache.go:107] acquiring lock: {Name:mk7835c40770d4044f387910d0b388b17002de99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0803 23:41:30.041755 1397775 cache.go:115] /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0803 23:41:30.041761 1397775 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 121.599µs
	I0803 23:41:30.041768 1397775 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0803 23:41:30.041774 1397775 cache.go:87] Successfully saved all images to host disk.
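	The cache.go lines above repeat one pattern per image: take a per-image lock, map "repo:tag" onto a "repo_tag" path under .minikube/cache/images/arm64, and skip the save when the tarball already exists. A minimal sketch of that existence check (cache path taken from the log, image list abbreviated, the loop itself illustrative):

	// Sketch: skip saving images whose cached tarball already exists on disk.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		cacheDir := "/home/jenkins/minikube-integration/19364-1180294/.minikube/cache/images/arm64"
		images := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
			"registry.k8s.io/pause:3.10",
		}
		for _, img := range images {
			// "repo:tag" becomes "repo_tag" on disk, as in the log paths above.
			p := filepath.Join(cacheDir, strings.ReplaceAll(img, ":", "_"))
			if _, err := os.Stat(p); err == nil {
				fmt.Println("cache hit, skipping save:", p)
				continue
			}
			fmt.Println("would save image to:", p)
		}
	}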
	I0803 23:41:30.048960 1397775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/config.json ...
	I0803 23:41:30.049359 1397775 machine.go:94] provisionDockerMachine start ...
	I0803 23:41:30.049470 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:30.076402 1397775 main.go:141] libmachine: Using SSH client type: native
	I0803 23:41:30.076966 1397775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34553 <nil> <nil>}
	I0803 23:41:30.076987 1397775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0803 23:41:30.077641 1397775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58736->127.0.0.1:34553: read: connection reset by peer
	I0803 23:41:33.212265 1397775 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344284
	
	I0803 23:41:33.212292 1397775 ubuntu.go:169] provisioning hostname "no-preload-344284"
	I0803 23:41:33.212361 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:33.229640 1397775 main.go:141] libmachine: Using SSH client type: native
	I0803 23:41:33.229893 1397775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34553 <nil> <nil>}
	I0803 23:41:33.229913 1397775 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-344284 && echo "no-preload-344284" | sudo tee /etc/hostname
	I0803 23:41:33.373142 1397775 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-344284
	
	I0803 23:41:33.373263 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:33.392033 1397775 main.go:141] libmachine: Using SSH client type: native
	I0803 23:41:33.392271 1397775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 34553 <nil> <nil>}
	I0803 23:41:33.392287 1397775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-344284' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-344284/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-344284' | sudo tee -a /etc/hosts; 
				fi
			fi
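	Provisioning here is plain SSH against the host port Docker published for 22/tcp, authenticated with the profile's id_rsa key named in the sshutil lines below. A small sketch of the same "run hostname over SSH" step, assuming golang.org/x/crypto/ssh; minikube's own client setup differs in detail:

	// Sketch: run `hostname` over SSH against the mapped port 127.0.0.1:34553.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34553", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // expect: no-preload-344284
	}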
	I0803 23:41:33.532959 1397775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0803 23:41:33.533001 1397775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19364-1180294/.minikube CaCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19364-1180294/.minikube}
	I0803 23:41:33.533032 1397775 ubuntu.go:177] setting up certificates
	I0803 23:41:33.533056 1397775 provision.go:84] configureAuth start
	I0803 23:41:33.533129 1397775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-344284
	I0803 23:41:33.549986 1397775 provision.go:143] copyHostCerts
	I0803 23:41:33.550071 1397775 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem, removing ...
	I0803 23:41:33.550085 1397775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem
	I0803 23:41:33.550160 1397775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.pem (1078 bytes)
	I0803 23:41:33.550273 1397775 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem, removing ...
	I0803 23:41:33.550290 1397775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem
	I0803 23:41:33.550318 1397775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/cert.pem (1123 bytes)
	I0803 23:41:33.550387 1397775 exec_runner.go:144] found /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem, removing ...
	I0803 23:41:33.550396 1397775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem
	I0803 23:41:33.550423 1397775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19364-1180294/.minikube/key.pem (1675 bytes)
	I0803 23:41:33.550487 1397775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem org=jenkins.no-preload-344284 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-344284]
	I0803 23:41:33.987993 1397775 provision.go:177] copyRemoteCerts
	I0803 23:41:33.988089 1397775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0803 23:41:33.988135 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:34.007168 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:34.106475 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0803 23:41:34.132861 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0803 23:41:34.158432 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0803 23:41:34.184285 1397775 provision.go:87] duration metric: took 651.210583ms to configureAuth
	I0803 23:41:34.184311 1397775 ubuntu.go:193] setting minikube options for container-runtime
	I0803 23:41:34.184508 1397775 config.go:182] Loaded profile config "no-preload-344284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0-rc.0
	I0803 23:41:34.184515 1397775 machine.go:97] duration metric: took 4.135145175s to provisionDockerMachine
	I0803 23:41:34.184522 1397775 start.go:293] postStartSetup for "no-preload-344284" (driver="docker")
	I0803 23:41:34.184534 1397775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0803 23:41:34.184581 1397775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0803 23:41:34.184618 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:34.206915 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:34.303076 1397775 ssh_runner.go:195] Run: cat /etc/os-release
	I0803 23:41:34.306766 1397775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0803 23:41:34.306804 1397775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0803 23:41:34.306815 1397775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0803 23:41:34.306823 1397775 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0803 23:41:34.306833 1397775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/addons for local assets ...
	I0803 23:41:34.306889 1397775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19364-1180294/.minikube/files for local assets ...
	I0803 23:41:34.306976 1397775 filesync.go:149] local asset: /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem -> 11857022.pem in /etc/ssl/certs
	I0803 23:41:34.307145 1397775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0803 23:41:34.316343 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem --> /etc/ssl/certs/11857022.pem (1708 bytes)
	I0803 23:41:34.341390 1397775 start.go:296] duration metric: took 156.84653ms for postStartSetup
	I0803 23:41:34.341475 1397775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:41:34.341531 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:34.357768 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:34.454025 1397775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0803 23:41:34.459418 1397775 fix.go:56] duration metric: took 4.861935607s for fixHost
	I0803 23:41:34.459445 1397775 start.go:83] releasing machines lock for "no-preload-344284", held for 4.861990376s
	I0803 23:41:34.459521 1397775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-344284
	I0803 23:41:34.481333 1397775 ssh_runner.go:195] Run: cat /version.json
	I0803 23:41:34.481390 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:34.482013 1397775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0803 23:41:34.482084 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:34.505674 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:34.506177 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:34.732859 1397775 ssh_runner.go:195] Run: systemctl --version
	I0803 23:41:34.737682 1397775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0803 23:41:34.742176 1397775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0803 23:41:34.759824 1397775 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0803 23:41:34.759914 1397775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0803 23:41:34.769383 1397775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0803 23:41:34.769460 1397775 start.go:495] detecting cgroup driver to use...
	I0803 23:41:34.769508 1397775 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0803 23:41:34.769598 1397775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0803 23:41:34.785176 1397775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0803 23:41:34.799439 1397775 docker.go:217] disabling cri-docker service (if available) ...
	I0803 23:41:34.799539 1397775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0803 23:41:34.813678 1397775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0803 23:41:34.825963 1397775 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0803 23:41:34.914977 1397775 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0803 23:41:35.000450 1397775 docker.go:233] disabling docker service ...
	I0803 23:41:35.000560 1397775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0803 23:41:35.020749 1397775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0803 23:41:35.034351 1397775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0803 23:41:35.133209 1397775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0803 23:41:35.218568 1397775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0803 23:41:35.232080 1397775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0803 23:41:35.248913 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:35.413862 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0803 23:41:35.424812 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0803 23:41:35.434808 1397775 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0803 23:41:35.434878 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0803 23:41:35.457642 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 23:41:35.475472 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0803 23:41:35.485408 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0803 23:41:35.495109 1397775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0803 23:41:35.504684 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0803 23:41:35.515149 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0803 23:41:35.525965 1397775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0803 23:41:35.536479 1397775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0803 23:41:35.545119 1397775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0803 23:41:35.553810 1397775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:41:35.643625 1397775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0803 23:41:35.798765 1397775 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0803 23:41:35.798855 1397775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0803 23:41:35.803138 1397775 start.go:563] Will wait 60s for crictl version
	I0803 23:41:35.803208 1397775 ssh_runner.go:195] Run: which crictl
	I0803 23:41:35.807856 1397775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0803 23:41:35.849701 1397775 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
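
"Will wait 60s for socket path" above is a simple stat poll: once /run/containerd/containerd.sock exists, the runtime is probed with crictl version, producing the output just shown. A hedged sketch of that wait loop in Go (the path and 60s budget match the log; the helper itself is hypothetical):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket file exists or the deadline passes,
// mirroring "Will wait 60s for socket path /run/containerd/containerd.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is there; safe to run `crictl version`
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
}
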
	I0803 23:41:35.849781 1397775 ssh_runner.go:195] Run: containerd --version
	I0803 23:41:35.876282 1397775 ssh_runner.go:195] Run: containerd --version
	I0803 23:41:35.904444 1397775 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on containerd 1.7.19 ...
	I0803 23:41:35.906686 1397775 cli_runner.go:164] Run: docker network inspect no-preload-344284 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0803 23:41:35.921726 1397775 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0803 23:41:35.925280 1397775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
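
The /etc/hosts update above is a grep-and-append: strip any stale host.minikube.internal line, re-append the current mapping, write to a temp file, then sudo cp it back so readers never see a half-written file. Roughly the same logic in Go (a sketch; the tab-separated entry format matches the grep pattern in the log, the temp path is illustrative):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.94.1\thost.minikube.internal" // gateway IP from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop stale mappings, mirroring: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Stage the new file; the log then does the privileged copy with `sudo cp`.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
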
	I0803 23:41:35.935980 1397775 kubeadm.go:883] updating cluster {Name:no-preload-344284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-344284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0803 23:41:35.936192 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:36.104907 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:36.263928 1397775 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubeadm.sha256
	I0803 23:41:36.425391 1397775 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0803 23:41:36.425474 1397775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0803 23:41:36.474617 1397775 containerd.go:627] all images are preloaded for containerd runtime.
	I0803 23:41:36.474645 1397775 cache_images.go:84] Images are preloaded, skipping loading
	I0803 23:41:36.474653 1397775 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.0-rc.0 containerd true true} ...
	I0803 23:41:36.474756 1397775 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-344284 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-344284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0803 23:41:36.474825 1397775 ssh_runner.go:195] Run: sudo crictl info
	I0803 23:41:36.517013 1397775 cni.go:84] Creating CNI manager for ""
	I0803 23:41:36.517041 1397775 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 23:41:36.517053 1397775 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0803 23:41:36.517077 1397775 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-344284 NodeName:no-preload-344284 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0803 23:41:36.517222 1397775 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-344284"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0803 23:41:36.517299 1397775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0803 23:41:36.526881 1397775 binaries.go:44] Found k8s binaries, skipping transfer
	I0803 23:41:36.526951 1397775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0803 23:41:36.536011 1397775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0803 23:41:36.554369 1397775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0803 23:41:36.574335 1397775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0803 23:41:36.599096 1397775 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0803 23:41:36.603321 1397775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0803 23:41:36.614355 1397775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:41:36.725530 1397775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:41:36.742825 1397775 certs.go:68] Setting up /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284 for IP: 192.168.94.2
	I0803 23:41:36.742849 1397775 certs.go:194] generating shared ca certs ...
	I0803 23:41:36.742865 1397775 certs.go:226] acquiring lock for ca certs: {Name:mk245d61d460943c9f9c4518cc1e3561b25bafd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:41:36.743003 1397775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key
	I0803 23:41:36.743054 1397775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key
	I0803 23:41:36.743066 1397775 certs.go:256] generating profile certs ...
	I0803 23:41:36.743149 1397775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.key
	I0803 23:41:36.743219 1397775 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/apiserver.key.00bdfced
	I0803 23:41:36.743265 1397775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/proxy-client.key
	I0803 23:41:36.743395 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702.pem (1338 bytes)
	W0803 23:41:36.743429 1397775 certs.go:480] ignoring /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702_empty.pem, impossibly tiny 0 bytes
	I0803 23:41:36.743440 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca-key.pem (1679 bytes)
	I0803 23:41:36.743463 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/ca.pem (1078 bytes)
	I0803 23:41:36.743496 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/cert.pem (1123 bytes)
	I0803 23:41:36.743527 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/key.pem (1675 bytes)
	I0803 23:41:36.743580 1397775 certs.go:484] found cert: /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem (1708 bytes)
	I0803 23:41:36.749564 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0803 23:41:36.779854 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0803 23:41:36.803649 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0803 23:41:36.835574 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0803 23:41:36.873060 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0803 23:41:36.900102 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0803 23:41:36.932553 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0803 23:41:36.961305 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0803 23:41:36.989616 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0803 23:41:37.019318 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/certs/1185702.pem --> /usr/share/ca-certificates/1185702.pem (1338 bytes)
	I0803 23:41:37.052136 1397775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/ssl/certs/11857022.pem --> /usr/share/ca-certificates/11857022.pem (1708 bytes)
	I0803 23:41:37.082907 1397775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0803 23:41:37.104306 1397775 ssh_runner.go:195] Run: openssl version
	I0803 23:41:37.110287 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0803 23:41:37.119683 1397775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:41:37.123163 1397775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  3 22:49 /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:41:37.123318 1397775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0803 23:41:37.130440 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0803 23:41:37.139932 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1185702.pem && ln -fs /usr/share/ca-certificates/1185702.pem /etc/ssl/certs/1185702.pem"
	I0803 23:41:37.149644 1397775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1185702.pem
	I0803 23:41:37.153380 1397775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  3 22:59 /usr/share/ca-certificates/1185702.pem
	I0803 23:41:37.153442 1397775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1185702.pem
	I0803 23:41:37.161881 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1185702.pem /etc/ssl/certs/51391683.0"
	I0803 23:41:37.171234 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11857022.pem && ln -fs /usr/share/ca-certificates/11857022.pem /etc/ssl/certs/11857022.pem"
	I0803 23:41:37.181046 1397775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11857022.pem
	I0803 23:41:37.184567 1397775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  3 22:59 /usr/share/ca-certificates/11857022.pem
	I0803 23:41:37.184641 1397775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11857022.pem
	I0803 23:41:37.191789 1397775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11857022.pem /etc/ssl/certs/3ec20f2e.0"
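
The ln -fs targets such as b5213941.0 and 3ec20f2e.0 are not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by the subject-name hash that `openssl x509 -hash -noout` prints, with a .0 suffix to disambiguate collisions, which is why each cert is hashed before being linked. A small Go sketch of computing the hash and creating the link (the PEM path comes from the log; error handling is trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL resolves CAs as <hash>.0 in the certs directory.
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // the -f behaviour of ln -fs
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}
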
	I0803 23:41:37.203174 1397775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0803 23:41:37.206765 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0803 23:41:37.213449 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0803 23:41:37.220287 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0803 23:41:37.227439 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0803 23:41:37.235578 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0803 23:41:37.242764 1397775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
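
Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing certs can be reused or must be regenerated. The equivalent check using only Go's standard library (a sketch; the cert path and 24h window are taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}
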
	I0803 23:41:37.249704 1397775 kubeadm.go:392] StartCluster: {Name:no-preload-344284 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-344284 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:41:37.249821 1397775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0803 23:41:37.249893 1397775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0803 23:41:37.290248 1397775 cri.go:89] found id: "c8ab608f0f0b3cef83473fed4cadeaf2fa75d29147d9588d111c3dd05c6420fb"
	I0803 23:41:37.290271 1397775 cri.go:89] found id: "c651dcb9b888abf82205310dfcd2fac85d25c2d8bd5fb629e8abc81ab4e9b88b"
	I0803 23:41:37.290276 1397775 cri.go:89] found id: "f97b034bafbab284fcf04c080c4dca6faaf4433fd8fb8f9591434901b2dfa4b1"
	I0803 23:41:37.290280 1397775 cri.go:89] found id: "ba5bff2652a59b9315416523f574d7241a2c066402399fe571c43792e6659da1"
	I0803 23:41:37.290284 1397775 cri.go:89] found id: "58bf3fcfd955362372b9dc3bda5a40eb2c1bff6e1899d340167c03fcd69d02be"
	I0803 23:41:37.290288 1397775 cri.go:89] found id: "6ea351915fbb82c6946939d983bc2bfc19bef7234a801c8be1481d601b766ded"
	I0803 23:41:37.290291 1397775 cri.go:89] found id: "32717f7f03ce4464c728ed3920d8a0b51699fa39103d3291b7da29ab551de5b6"
	I0803 23:41:37.290294 1397775 cri.go:89] found id: "e18ab35ca669b7f9117c547e9dc835e66589b9a4f0710977544f486538867a8f"
	I0803 23:41:37.290300 1397775 cri.go:89] found id: ""
	I0803 23:41:37.290355 1397775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0803 23:41:37.319812 1397775 cri.go:116] JSON = null
	W0803 23:41:37.319865 1397775 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
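
The "unpause failed" warning comes from cross-checking two views of the runtime: crictl ps found 8 kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` printed `null`, i.e. there is nothing paused to resume, so the unpause step is skipped as a no-op. Decoding that output in Go shows why `null` is harmless here (the container struct is a hypothetical subset of runc's JSON, not its exact schema):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer is a hypothetical subset of the fields runc emits per container.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte("null") // what `runc list -f json` printed in the log

	var list []runcContainer
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	// Unmarshalling JSON null into a slice leaves it nil, so len(list) == 0:
	// "list returned 0 containers, but ps returned 8".
	fmt.Println("paused containers:", len(list))
}
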
	I0803 23:41:37.319946 1397775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0803 23:41:37.331313 1397775 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0803 23:41:37.331333 1397775 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0803 23:41:37.331386 1397775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0803 23:41:37.348014 1397775 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0803 23:41:37.348656 1397775 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-344284" does not appear in /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:41:37.348947 1397775 kubeconfig.go:62] /home/jenkins/minikube-integration/19364-1180294/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-344284" cluster setting kubeconfig missing "no-preload-344284" context setting]
	I0803 23:41:37.349491 1397775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/kubeconfig: {Name:mk7ac442c13ee76103bb330a149278eea8a7c99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
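
Repairing the kubeconfig means adding the missing "no-preload-344284" cluster and context entries and rewriting the file under the write lock shown above. With client-go's clientcmd package the same repair looks roughly like this (the server address and names come from the log; the auth-info wiring is an assumption for illustration):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/19364-1180294/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}

	// Add the missing cluster and context settings the verify step complained about.
	cfg.Clusters["no-preload-344284"] = &clientcmdapi.Cluster{
		Server: "https://192.168.94.2:8443",
	}
	cfg.Contexts["no-preload-344284"] = &clientcmdapi.Context{
		Cluster:  "no-preload-344284",
		AuthInfo: "no-preload-344284", // assumes a matching user entry already exists
	}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
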
	I0803 23:41:37.350874 1397775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0803 23:41:37.363994 1397775 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0803 23:41:37.364025 1397775 kubeadm.go:597] duration metric: took 32.685419ms to restartPrimaryControlPlane
	I0803 23:41:37.364035 1397775 kubeadm.go:394] duration metric: took 114.342702ms to StartCluster
	I0803 23:41:37.364050 1397775 settings.go:142] acquiring lock: {Name:mk6781ca2b0427afb2b67408884ede06d33d8dbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:41:37.364114 1397775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:41:37.365137 1397775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/kubeconfig: {Name:mk7ac442c13ee76103bb330a149278eea8a7c99f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 23:41:37.365335 1397775 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0803 23:41:37.365626 1397775 config.go:182] Loaded profile config "no-preload-344284": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0-rc.0
	I0803 23:41:37.365668 1397775 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0803 23:41:37.365739 1397775 addons.go:69] Setting storage-provisioner=true in profile "no-preload-344284"
	I0803 23:41:37.365765 1397775 addons.go:234] Setting addon storage-provisioner=true in "no-preload-344284"
	W0803 23:41:37.365777 1397775 addons.go:243] addon storage-provisioner should already be in state true
	I0803 23:41:37.365843 1397775 host.go:66] Checking if "no-preload-344284" exists ...
	I0803 23:41:37.365795 1397775 addons.go:69] Setting default-storageclass=true in profile "no-preload-344284"
	I0803 23:41:37.366108 1397775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-344284"
	I0803 23:41:37.366467 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:37.366550 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:37.365800 1397775 addons.go:69] Setting dashboard=true in profile "no-preload-344284"
	I0803 23:41:37.366944 1397775 addons.go:234] Setting addon dashboard=true in "no-preload-344284"
	W0803 23:41:37.366953 1397775 addons.go:243] addon dashboard should already be in state true
	I0803 23:41:37.366976 1397775 host.go:66] Checking if "no-preload-344284" exists ...
	I0803 23:41:37.367359 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:37.365805 1397775 addons.go:69] Setting metrics-server=true in profile "no-preload-344284"
	I0803 23:41:37.367689 1397775 addons.go:234] Setting addon metrics-server=true in "no-preload-344284"
	W0803 23:41:37.367713 1397775 addons.go:243] addon metrics-server should already be in state true
	I0803 23:41:37.367748 1397775 host.go:66] Checking if "no-preload-344284" exists ...
	I0803 23:41:37.368146 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:37.370308 1397775 out.go:177] * Verifying Kubernetes components...
	I0803 23:41:37.374138 1397775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0803 23:41:37.421761 1397775 addons.go:234] Setting addon default-storageclass=true in "no-preload-344284"
	W0803 23:41:37.421783 1397775 addons.go:243] addon default-storageclass should already be in state true
	I0803 23:41:37.421808 1397775 host.go:66] Checking if "no-preload-344284" exists ...
	I0803 23:41:37.422200 1397775 cli_runner.go:164] Run: docker container inspect no-preload-344284 --format={{.State.Status}}
	I0803 23:41:37.439431 1397775 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0803 23:41:37.439562 1397775 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0803 23:41:37.441364 1397775 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0803 23:41:37.441409 1397775 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:41:37.441424 1397775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0803 23:41:37.441486 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:37.443267 1397775 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0803 23:41:37.443294 1397775 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0803 23:41:37.443356 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:37.445294 1397775 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0803 23:41:36.199366 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:38.201693 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:37.451202 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0803 23:41:37.451253 1397775 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0803 23:41:37.451338 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:37.481564 1397775 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0803 23:41:37.481584 1397775 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0803 23:41:37.481651 1397775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-344284
	I0803 23:41:37.518425 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:37.524186 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:37.536012 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:37.546318 1397775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34553 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/no-preload-344284/id_rsa Username:docker}
	I0803 23:41:37.619925 1397775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0803 23:41:37.717297 1397775 node_ready.go:35] waiting up to 6m0s for node "no-preload-344284" to be "Ready" ...
	I0803 23:41:37.816578 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:41:37.817891 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0803 23:41:37.817910 1397775 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0803 23:41:37.848278 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:41:37.891979 1397775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0803 23:41:37.892056 1397775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0803 23:41:37.952003 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0803 23:41:37.952077 1397775 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0803 23:41:38.051939 1397775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0803 23:41:38.052016 1397775 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0803 23:41:38.147384 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0803 23:41:38.147455 1397775 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0803 23:41:38.166376 1397775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0803 23:41:38.166456 1397775 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0803 23:41:38.294750 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0803 23:41:38.368325 1397775 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0803 23:41:38.368423 1397775 retry.go:31] will retry after 180.77962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0803 23:41:38.411327 1397775 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0803 23:41:38.411405 1397775 retry.go:31] will retry after 239.796183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
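
Both failed applies share one root cause: the apiserver on localhost:8443 is not accepting connections yet while the control plane restarts, so the error is transient and each apply is retried after a short delay (180ms, then 239ms above; the uneven values suggest jitter). A generic version of that loop in Go (a sketch using plain exponential backoff for clarity, not minikube's retry.go):

package main

import (
	"fmt"
	"time"
)

// retry runs fn up to attempts times, doubling the delay after each failure,
// in the spirit of the retry.go lines in the log.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	_ = retry(5, 200*time.Millisecond, func() error {
		// `kubectl apply -f ...` would run here; it fails while the
		// apiserver is still coming back up, then eventually succeeds.
		return nil
	})
}
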
	I0803 23:41:38.499051 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0803 23:41:38.499115 1397775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0803 23:41:38.531952 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0803 23:41:38.532028 1397775 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0803 23:41:38.549563 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0803 23:41:38.555317 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0803 23:41:38.555391 1397775 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0803 23:41:38.633560 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0803 23:41:38.633635 1397775 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0803 23:41:38.651586 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0803 23:41:38.689944 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0803 23:41:38.690014 1397775 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0803 23:41:38.886687 1397775 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0803 23:41:38.886760 1397775 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0803 23:41:38.982540 1397775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0803 23:41:40.703129 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:43.199000 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:42.483736 1397775 node_ready.go:49] node "no-preload-344284" has status "Ready":"True"
	I0803 23:41:42.483760 1397775 node_ready.go:38] duration metric: took 4.766389062s for node "no-preload-344284" to be "Ready" ...
	I0803 23:41:42.483769 1397775 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
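
Each pod_ready wait below is a poll loop: fetch the pod, inspect its PodReady condition, log has status "Ready":"False" when it is not yet true, and try again until the 6m budget runs out, which is why the rest of this log alternates between the two test processes' polls. The same check with client-go (a sketch; the kubeconfig path and pod name are from the log, the loop details are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19364-1180294/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-2cdcf", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					fmt.Printf("pod has status %q:%q\n", c.Type, c.Status)
					if c.Status == corev1.ConditionTrue {
						return // pod is Ready; the wait ends
					}
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
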
	I0803 23:41:42.531725 1397775 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6f6b679f8f-2cdcf" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.563986 1397775 pod_ready.go:92] pod "coredns-6f6b679f8f-2cdcf" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:42.564060 1397775 pod_ready.go:81] duration metric: took 32.23121ms for pod "coredns-6f6b679f8f-2cdcf" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.564096 1397775 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.574922 1397775 pod_ready.go:92] pod "etcd-no-preload-344284" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:42.574994 1397775 pod_ready.go:81] duration metric: took 10.86657ms for pod "etcd-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.575022 1397775 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.580390 1397775 pod_ready.go:92] pod "kube-apiserver-no-preload-344284" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:42.580461 1397775 pod_ready.go:81] duration metric: took 5.417002ms for pod "kube-apiserver-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.580489 1397775 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.586101 1397775 pod_ready.go:92] pod "kube-controller-manager-no-preload-344284" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:42.586164 1397775 pod_ready.go:81] duration metric: took 5.653047ms for pod "kube-controller-manager-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.586201 1397775 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sr8w2" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.688020 1397775 pod_ready.go:92] pod "kube-proxy-sr8w2" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:42.688088 1397775 pod_ready.go:81] duration metric: took 101.866283ms for pod "kube-proxy-sr8w2" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:42.688126 1397775 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:43.097525 1397775 pod_ready.go:92] pod "kube-scheduler-no-preload-344284" in "kube-system" namespace has status "Ready":"True"
	I0803 23:41:43.097625 1397775 pod_ready.go:81] duration metric: took 409.470902ms for pod "kube-scheduler-no-preload-344284" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:43.097653 1397775 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace to be "Ready" ...
	I0803 23:41:45.134875 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:45.454807 1397775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.803083704s)
	I0803 23:41:45.455112 1397775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.904973298s)
	I0803 23:41:45.455363 1397775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.160531519s)
	I0803 23:41:45.455494 1397775 addons.go:475] Verifying addon metrics-server=true in "no-preload-344284"
	I0803 23:41:45.600639 1397775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.618007079s)
	I0803 23:41:45.603990 1397775 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-344284 addons enable metrics-server
	
	I0803 23:41:45.606861 1397775 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0803 23:41:45.200504 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:47.699001 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:45.608713 1397775 addons.go:510] duration metric: took 8.243040413s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0803 23:41:47.603017 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:49.699340 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:51.699432 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:53.700360 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:49.604301 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:52.108215 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:55.700581 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:58.198268 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:54.604128 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:57.104831 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:00.224981 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:02.698424 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:41:59.603620 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:01.605955 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:04.104574 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:04.699134 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:07.198826 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:06.105068 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:08.604638 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:09.698037 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:11.699478 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:11.103749 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:13.603435 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:14.199026 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:16.699028 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:18.699474 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:16.103610 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:18.104144 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:21.199796 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:23.701490 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:20.105168 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:22.603837 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:26.198342 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:28.199086 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:24.604189 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:27.104353 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:30.200328 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:32.699536 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:29.603272 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:31.604275 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:33.605016 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:35.199476 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:37.698937 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:36.104151 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:38.104409 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:40.198984 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:42.204378 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:40.105794 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:42.604390 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:44.698855 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:46.699179 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:48.702669 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:44.604562 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:47.104824 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:51.199902 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:53.200521 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:49.604470 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:52.104645 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:55.698645 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:57.703669 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:54.604511 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:57.103410 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:42:59.103783 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:00.218059 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:02.698937 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:01.603604 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:04.104366 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:05.198988 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:07.698134 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:06.104525 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:08.603484 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:09.700180 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:12.223904 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:11.103902 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:13.105972 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:14.698373 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:16.698784 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:18.699000 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:15.603982 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:18.104240 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:21.198740 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:23.199428 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:20.104961 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:22.603655 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:25.199512 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:27.698426 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:24.604112 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:27.103652 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:29.103713 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:30.200553 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:32.699164 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:31.604372 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:34.104257 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:35.198680 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:37.199039 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:36.604328 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:39.104116 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:39.199410 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:41.698052 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:43.698689 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:41.603157 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:44.103310 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:46.203672 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:48.699366 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:46.603628 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:48.604072 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:51.198826 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:53.698907 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:51.104358 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:53.603682 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:56.199013 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:58.698537 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:56.103672 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:43:58.104429 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:00.699403 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:02.705402 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:00.106805 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:02.605546 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:05.198904 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:07.700261 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:05.105155 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:07.603509 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:10.198829 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:12.199083 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:09.603941 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:11.604100 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:14.104316 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:14.199252 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:16.698753 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:16.603988 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:18.604133 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:19.199876 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:21.698499 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:23.698553 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:21.104632 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:23.604008 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:26.200542 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:28.698447 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:25.604303 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:28.103412 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:30.698685 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:32.698718 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:30.106884 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:32.603005 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:34.699092 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:37.198328 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:34.603365 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:36.603973 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:39.104236 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:39.199068 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:41.698264 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:43.699121 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:41.603461 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:43.604016 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:46.199245 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:48.699142 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:46.104141 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:48.105078 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:51.198567 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:53.698856 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:50.604019 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:53.105487 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:56.199533 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:58.697721 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:55.603521 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:44:58.104352 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:00.699068 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:02.700391 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:00.176775 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:02.603779 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:05.198810 1389119 pod_ready.go:102] pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:06.699353 1389119 pod_ready.go:81] duration metric: took 4m0.006793463s for pod "metrics-server-9975d5f86-wm57q" in "kube-system" namespace to be "Ready" ...
	E0803 23:45:06.699379 1389119 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0803 23:45:06.699393 1389119 pod_ready.go:38] duration metric: took 5m26.143754509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
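
The entries above come from minikube's readiness poll: it rechecks the pod every couple of seconds, logs the "Ready":"False" status each time, and gives up when a fixed deadline (here 4m0s) expires, surfacing the context-deadline error. A minimal client-go sketch of that pattern, for orientation only (the import paths and API calls are real; the helper name, package, and two-second interval are illustrative and not minikube's exact pod_ready.go code):

	// Package diag holds illustrative sketches; not minikube source.
	package diag

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod's Ready condition until it is true or the
	// deadline passes, mirroring the repeated status lines in the log.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reported Ready
					}
				}
				fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // context deadline exceeded, as above
			case <-time.After(2 * time.Second):
			}
		}
	}
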
	I0803 23:45:06.699407 1389119 api_server.go:52] waiting for apiserver process to appear ...
	I0803 23:45:06.699437 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:45:06.699508 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:45:06.762175 1389119 cri.go:89] found id: "2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:06.762196 1389119 cri.go:89] found id: "fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:06.762200 1389119 cri.go:89] found id: ""
	I0803 23:45:06.762210 1389119 logs.go:276] 2 containers: [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d]
	I0803 23:45:06.762267 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.765974 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.769653 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0803 23:45:06.769726 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:45:06.807389 1389119 cri.go:89] found id: "5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:06.807413 1389119 cri.go:89] found id: "17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:06.807418 1389119 cri.go:89] found id: ""
	I0803 23:45:06.807426 1389119 logs.go:276] 2 containers: [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e]
	I0803 23:45:06.807482 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.811022 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.814520 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0803 23:45:06.814593 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:45:06.856056 1389119 cri.go:89] found id: "b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:06.856130 1389119 cri.go:89] found id: "1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:06.856150 1389119 cri.go:89] found id: ""
	I0803 23:45:06.856174 1389119 logs.go:276] 2 containers: [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4]
	I0803 23:45:06.856257 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.860079 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.863499 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:45:06.863594 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:45:06.905512 1389119 cri.go:89] found id: "9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:06.905534 1389119 cri.go:89] found id: "1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:06.905539 1389119 cri.go:89] found id: ""
	I0803 23:45:06.905545 1389119 logs.go:276] 2 containers: [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9]
	I0803 23:45:06.905622 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.909250 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.912616 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:45:06.912745 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:45:06.949355 1389119 cri.go:89] found id: "aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:06.949379 1389119 cri.go:89] found id: "9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:06.949385 1389119 cri.go:89] found id: ""
	I0803 23:45:06.949392 1389119 logs.go:276] 2 containers: [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc]
	I0803 23:45:06.949477 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.953258 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.957005 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:45:06.957132 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:45:06.994111 1389119 cri.go:89] found id: "decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:06.994136 1389119 cri.go:89] found id: "e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:06.994141 1389119 cri.go:89] found id: ""
	I0803 23:45:06.994148 1389119 logs.go:276] 2 containers: [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e]
	I0803 23:45:06.994205 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:06.998167 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.003669 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0803 23:45:07.003788 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:45:07.050511 1389119 cri.go:89] found id: "e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:07.050571 1389119 cri.go:89] found id: "0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:07.050591 1389119 cri.go:89] found id: ""
	I0803 23:45:07.050605 1389119 logs.go:276] 2 containers: [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34]
	I0803 23:45:07.050661 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.054359 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.058014 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0803 23:45:07.058119 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0803 23:45:07.096089 1389119 cri.go:89] found id: "32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:07.096114 1389119 cri.go:89] found id: ""
	I0803 23:45:07.096122 1389119 logs.go:276] 1 containers: [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838]
	I0803 23:45:07.096176 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.099595 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0803 23:45:07.099711 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0803 23:45:07.162961 1389119 cri.go:89] found id: "fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:07.163028 1389119 cri.go:89] found id: "9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:07.163047 1389119 cri.go:89] found id: ""
	I0803 23:45:07.163070 1389119 logs.go:276] 2 containers: [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e]
	I0803 23:45:07.163166 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:07.166813 1389119 ssh_runner.go:195] Run: which crictl
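
Each block above is the same discovery step applied to a different component: run crictl filtered by a container name, collect the non-empty IDs it prints, and record how many were found. A rough local equivalent, run directly rather than through minikube's ssh_runner (the crictl invocation matches the log; the function and package are illustrative):

	// listContainers returns the IDs crictl reports for a named container,
	// mirroring the "found id:" / "N containers:" lines in the log.
	package diag

	import (
		"os/exec"
		"strings"
	)

	func listContainers(name string) ([]string, error) {
		// Same invocation the log shows, e.g. --name=kube-apiserver.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line) // one container ID per line
			}
		}
		return ids, nil
	}
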
	I0803 23:45:07.170591 1389119 logs.go:123] Gathering logs for kubelet ...
	I0803 23:45:07.170624 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 23:45:07.229954 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459162     663 reflector.go:138] object-"kube-system"/"coredns-token-9d2xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-9d2xv" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230178 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459309     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230387 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459518     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230605 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459607     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-mfhhp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mfhhp" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.230820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459825     663 reflector.go:138] object-"kube-system"/"kindnet-token-ghz8l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghz8l" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231056 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470624     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xnstr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xnstr" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231267 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470708     663 reflector.go:138] object-"default"/"default-token-2m78r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2m78r" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.231489 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470783     663 reflector.go:138] object-"kube-system"/"metrics-server-token-nxdwh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nxdwh" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:07.239211 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.798080     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.239402 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.847187     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.242987 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:55 old-k8s-version-820414 kubelet[663]: E0803 23:39:55.456344     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.244679 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:09 old-k8s-version-820414 kubelet[663]: E0803 23:40:09.485277     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.245677 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:12 old-k8s-version-820414 kubelet[663]: E0803 23:40:12.189775     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.246011 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:13 old-k8s-version-820414 kubelet[663]: E0803 23:40:13.184996     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.246477 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:15 old-k8s-version-820414 kubelet[663]: E0803 23:40:15.194567     663 pod_workers.go:191] Error syncing pod 760afa3c-130b-47d5-a942-ae27ff7ac5f5 ("storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"
	W0803 23:45:07.246820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:16 old-k8s-version-820414 kubelet[663]: E0803 23:40:16.619029     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.249665 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:23 old-k8s-version-820414 kubelet[663]: E0803 23:40:23.484273     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.250393 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:30 old-k8s-version-820414 kubelet[663]: E0803 23:40:30.262767     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.250579 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.524198     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.250908 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.615532     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.251237 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:48 old-k8s-version-820414 kubelet[663]: E0803 23:40:48.444598     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.251425 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:49 old-k8s-version-820414 kubelet[663]: E0803 23:40:49.445255     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.251622 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:03 old-k8s-version-820414 kubelet[663]: E0803 23:41:03.444496     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.252220 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:04 old-k8s-version-820414 kubelet[663]: E0803 23:41:04.368191     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.252575 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:06 old-k8s-version-820414 kubelet[663]: E0803 23:41:06.609272     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.255047 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:15 old-k8s-version-820414 kubelet[663]: E0803 23:41:15.481112     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.255380 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:20 old-k8s-version-820414 kubelet[663]: E0803 23:41:20.444068     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.255574 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:26 old-k8s-version-820414 kubelet[663]: E0803 23:41:26.444477     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.255904 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:35 old-k8s-version-820414 kubelet[663]: E0803 23:41:35.444186     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.256112 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:38 old-k8s-version-820414 kubelet[663]: E0803 23:41:38.444525     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.256714 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:47 old-k8s-version-820414 kubelet[663]: E0803 23:41:47.552369     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.256913 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:52 old-k8s-version-820414 kubelet[663]: E0803 23:41:52.444351     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.257247 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:56 old-k8s-version-820414 kubelet[663]: E0803 23:41:56.604520     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.257437 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:07 old-k8s-version-820414 kubelet[663]: E0803 23:42:07.444300     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.257772 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:11 old-k8s-version-820414 kubelet[663]: E0803 23:42:11.444597     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.258088 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444598     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.258287 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444840     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.258626 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:34 old-k8s-version-820414 kubelet[663]: E0803 23:42:34.444102     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.261093 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:37 old-k8s-version-820414 kubelet[663]: E0803 23:42:37.453580     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:07.261430 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:49 old-k8s-version-820414 kubelet[663]: E0803 23:42:49.444596     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.261616 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:51 old-k8s-version-820414 kubelet[663]: E0803 23:42:51.445065     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.261945 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:02 old-k8s-version-820414 kubelet[663]: E0803 23:43:02.444071     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.262132 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:05 old-k8s-version-820414 kubelet[663]: E0803 23:43:05.444843     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.262727 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:16 old-k8s-version-820414 kubelet[663]: E0803 23:43:16.774818     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.262915 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:18 old-k8s-version-820414 kubelet[663]: E0803 23:43:18.444453     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.263246 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:26 old-k8s-version-820414 kubelet[663]: E0803 23:43:26.604545     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.263433 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:29 old-k8s-version-820414 kubelet[663]: E0803 23:43:29.445577     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.263764 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:41 old-k8s-version-820414 kubelet[663]: E0803 23:43:41.444779     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.263951 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:44 old-k8s-version-820414 kubelet[663]: E0803 23:43:44.444419     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.264281 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:56 old-k8s-version-820414 kubelet[663]: E0803 23:43:56.444259     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.264467 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:58 old-k8s-version-820414 kubelet[663]: E0803 23:43:58.444353     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.264806 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: E0803 23:44:08.444136     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.264996 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:12 old-k8s-version-820414 kubelet[663]: E0803 23:44:12.444406     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.265328 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: E0803 23:44:22.444030     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.265513 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:23 old-k8s-version-820414 kubelet[663]: E0803 23:44:23.449169     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.265698 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:35 old-k8s-version-820414 kubelet[663]: E0803 23:44:35.445923     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.266028 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.266363 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:07.266550 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.266737 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:07.267072 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
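
The W-level entries above are minikube's log scan flagging every kubelet journal line that matches a known problem pattern before the logs are rendered. A hypothetical sketch of that kind of scan (the regexp and helper are illustrative; minikube's logs.go applies its own pattern set):

	// findKubeletProblems flags journal lines that look like error-level
	// kubelet records, in the spirit of the "Found kubelet problem" entries.
	package diag

	import (
		"regexp"
		"strings"
	)

	var problemRe = regexp.MustCompile(`kubelet\[\d+\]: E\d{4}`) // e.g. "kubelet[663]: E0803"

	func findKubeletProblems(journal string) []string {
		var problems []string
		for _, line := range strings.Split(journal, "\n") {
			if problemRe.MatchString(line) {
				problems = append(problems, line)
			}
		}
		return problems
	}
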
	I0803 23:45:07.267087 1389119 logs.go:123] Gathering logs for dmesg ...
	I0803 23:45:07.267104 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 23:45:07.288852 1389119 logs.go:123] Gathering logs for kube-apiserver [fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d] ...
	I0803 23:45:07.288927 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:07.347909 1389119 logs.go:123] Gathering logs for etcd [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6] ...
	I0803 23:45:07.347941 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:07.391806 1389119 logs.go:123] Gathering logs for kube-proxy [9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc] ...
	I0803 23:45:07.391836 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:07.437412 1389119 logs.go:123] Gathering logs for kube-controller-manager [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a] ...
	I0803 23:45:07.437441 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:07.512477 1389119 logs.go:123] Gathering logs for kindnet [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659] ...
	I0803 23:45:07.512513 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:07.574142 1389119 logs.go:123] Gathering logs for kindnet [0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34] ...
	I0803 23:45:07.574177 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:07.636991 1389119 logs.go:123] Gathering logs for kubernetes-dashboard [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838] ...
	I0803 23:45:07.637025 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:07.679758 1389119 logs.go:123] Gathering logs for storage-provisioner [9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e] ...
	I0803 23:45:07.679829 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:07.729314 1389119 logs.go:123] Gathering logs for containerd ...
	I0803 23:45:07.729344 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0803 23:45:07.793803 1389119 logs.go:123] Gathering logs for describe nodes ...
	I0803 23:45:07.793843 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 23:45:07.975479 1389119 logs.go:123] Gathering logs for coredns [1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4] ...
	I0803 23:45:07.975517 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:08.018369 1389119 logs.go:123] Gathering logs for kube-scheduler [1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9] ...
	I0803 23:45:08.018398 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:08.071822 1389119 logs.go:123] Gathering logs for kube-proxy [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5] ...
	I0803 23:45:08.071855 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:08.120580 1389119 logs.go:123] Gathering logs for etcd [17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e] ...
	I0803 23:45:08.120607 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:08.165131 1389119 logs.go:123] Gathering logs for coredns [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02] ...
	I0803 23:45:08.165163 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:08.204082 1389119 logs.go:123] Gathering logs for kube-controller-manager [e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e] ...
	I0803 23:45:08.204111 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:08.278449 1389119 logs.go:123] Gathering logs for kube-apiserver [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d] ...
	I0803 23:45:08.278489 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:08.346514 1389119 logs.go:123] Gathering logs for kube-scheduler [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a] ...
	I0803 23:45:08.346551 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:08.385202 1389119 logs.go:123] Gathering logs for storage-provisioner [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898] ...
	I0803 23:45:08.385237 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:08.436943 1389119 logs.go:123] Gathering logs for container status ...
	I0803 23:45:08.436973 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
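Note the fallback idiom in the "container status" command above: the runner prefers crictl and falls back to docker only if crictl is absent. A minimal annotated sketch of that same pattern (this is a restatement of the command in the log, not minikube source):

    # `which crictl || echo crictl` yields the crictl path, or the bare name
    # "crictl" (which then fails under sudo and triggers the docker fallback).
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a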
	I0803 23:45:08.513063 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:08.513090 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 23:45:08.513149 1389119 out.go:239] X Problems detected in kubelet:
	W0803 23:45:08.513166 1389119 out.go:239]   Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:08.513182 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:08.513328 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:08.513337 1389119 out.go:239]   Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:08.513343 1389119 out.go:239]   Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	I0803 23:45:08.513354 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:08.513365 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
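The summary block above repeats two kubelet problems: metrics-server-9975d5f86-wm57q stuck in ImagePullBackOff because its image is pinned to the unreachable registry fake.domain (which this test appears to configure deliberately so the image can never be pulled), and dashboard-metrics-scraper-8d5bb5db8-lfzsx in CrashLoopBackOff. A hedged way to confirm the same state by hand, using the context and pod names taken from this log:

    # Pod names are copied from the log above; output shape is an assumption.
    kubectl --context old-k8s-version-820414 -n kube-system \
      describe pod metrics-server-9975d5f86-wm57q | tail -n 20
    kubectl --context old-k8s-version-820414 -n kubernetes-dashboard \
      describe pod dashboard-metrics-scraper-8d5bb5db8-lfzsx | tail -n 20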
	I0803 23:45:05.105487 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:07.105832 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:09.603969 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:11.604284 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:14.105061 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:18.514287 1389119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:45:18.526473 1389119 api_server.go:72] duration metric: took 5m57.099654038s to wait for apiserver process to appear ...
	I0803 23:45:18.526500 1389119 api_server.go:88] waiting for apiserver healthz status ...
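Having found the kube-apiserver process, minikube now waits on the apiserver's healthz endpoint. A minimal manual equivalent, assuming the default apiserver port 8443 and a node IP on the 192.168.76.x network implied by the DNS lookups in the kubelet errors (both are assumptions, not values from this log):

    # Node IP and port are inferred, not logged; a healthy apiserver returns "ok".
    curl -k https://192.168.76.2:8443/healthz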
	I0803 23:45:18.526535 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0803 23:45:18.526600 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0803 23:45:18.567715 1389119 cri.go:89] found id: "2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:18.567739 1389119 cri.go:89] found id: "fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:18.567744 1389119 cri.go:89] found id: ""
	I0803 23:45:18.567751 1389119 logs.go:276] 2 containers: [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d]
	I0803 23:45:18.567807 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.571380 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.574952 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0803 23:45:18.575024 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0803 23:45:18.615398 1389119 cri.go:89] found id: "5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:18.615423 1389119 cri.go:89] found id: "17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:18.615428 1389119 cri.go:89] found id: ""
	I0803 23:45:18.615436 1389119 logs.go:276] 2 containers: [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e]
	I0803 23:45:18.615491 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.619044 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.623004 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0803 23:45:18.623101 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0803 23:45:18.666718 1389119 cri.go:89] found id: "b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:18.666739 1389119 cri.go:89] found id: "1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:18.666744 1389119 cri.go:89] found id: ""
	I0803 23:45:18.666751 1389119 logs.go:276] 2 containers: [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4]
	I0803 23:45:18.666810 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.670661 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.674310 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0803 23:45:18.674385 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0803 23:45:18.715500 1389119 cri.go:89] found id: "9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:18.715541 1389119 cri.go:89] found id: "1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:18.715546 1389119 cri.go:89] found id: ""
	I0803 23:45:18.715553 1389119 logs.go:276] 2 containers: [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9]
	I0803 23:45:18.715616 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.719414 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.723323 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0803 23:45:18.723424 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0803 23:45:18.767590 1389119 cri.go:89] found id: "aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:18.767614 1389119 cri.go:89] found id: "9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:18.767620 1389119 cri.go:89] found id: ""
	I0803 23:45:18.767627 1389119 logs.go:276] 2 containers: [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc]
	I0803 23:45:18.767685 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.771782 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.775255 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0803 23:45:18.775365 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0803 23:45:18.812952 1389119 cri.go:89] found id: "decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:18.812978 1389119 cri.go:89] found id: "e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:18.812984 1389119 cri.go:89] found id: ""
	I0803 23:45:18.812991 1389119 logs.go:276] 2 containers: [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e]
	I0803 23:45:18.813050 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.817560 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.821261 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0803 23:45:18.821336 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0803 23:45:18.874734 1389119 cri.go:89] found id: "e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:18.874797 1389119 cri.go:89] found id: "0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:18.874808 1389119 cri.go:89] found id: ""
	I0803 23:45:18.874815 1389119 logs.go:276] 2 containers: [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34]
	I0803 23:45:18.874878 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.878704 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.882687 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0803 23:45:18.882760 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0803 23:45:18.930472 1389119 cri.go:89] found id: "32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:18.930547 1389119 cri.go:89] found id: ""
	I0803 23:45:18.930562 1389119 logs.go:276] 1 containers: [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838]
	I0803 23:45:18.930627 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.934504 1389119 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0803 23:45:18.934584 1389119 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0803 23:45:18.972093 1389119 cri.go:89] found id: "fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:18.972114 1389119 cri.go:89] found id: "9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:18.972118 1389119 cri.go:89] found id: ""
	I0803 23:45:18.972126 1389119 logs.go:276] 2 containers: [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e]
	I0803 23:45:18.972181 1389119 ssh_runner.go:195] Run: which crictl
	I0803 23:45:18.975653 1389119 ssh_runner.go:195] Run: which crictl
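The block above applies one discovery pattern per control-plane component: list all CRI containers (running and exited) filtered by name, then resolve crictl for each ID found. A condensed sketch of the same loop (the component list is read off this log; the loop itself is illustrative, not minikube's actual code):

    # For each component, list current and exited container IDs via crictl.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard \
                storage-provisioner; do
      echo "== $name =="
      sudo crictl ps -a --quiet --name="$name"
    done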
	I0803 23:45:18.979224 1389119 logs.go:123] Gathering logs for etcd [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6] ...
	I0803 23:45:18.979249 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6"
	I0803 23:45:19.024836 1389119 logs.go:123] Gathering logs for kube-controller-manager [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a] ...
	I0803 23:45:19.024865 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a"
	I0803 23:45:19.085464 1389119 logs.go:123] Gathering logs for kube-controller-manager [e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e] ...
	I0803 23:45:19.085496 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e"
	I0803 23:45:16.604659 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:19.103159 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:19.152566 1389119 logs.go:123] Gathering logs for storage-provisioner [9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e] ...
	I0803 23:45:19.152598 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e"
	I0803 23:45:19.196949 1389119 logs.go:123] Gathering logs for containerd ...
	I0803 23:45:19.196976 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0803 23:45:19.256338 1389119 logs.go:123] Gathering logs for kubelet ...
	I0803 23:45:19.256370 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0803 23:45:19.312575 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459162     663 reflector.go:138] object-"kube-system"/"coredns-token-9d2xv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-9d2xv" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.312808 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459309     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313021 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459518     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313244 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459607     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-mfhhp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mfhhp" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313461 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.459825     663 reflector.go:138] object-"kube-system"/"kindnet-token-ghz8l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghz8l" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313744 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470624     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xnstr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xnstr" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.313958 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470708     663 reflector.go:138] object-"default"/"default-token-2m78r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2m78r" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.314180 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:40 old-k8s-version-820414 kubelet[663]: E0803 23:39:40.470783     663 reflector.go:138] object-"kube-system"/"metrics-server-token-nxdwh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nxdwh" is forbidden: User "system:node:old-k8s-version-820414" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-820414' and this object
	W0803 23:45:19.321969 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.798080     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.322165 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:42 old-k8s-version-820414 kubelet[663]: E0803 23:39:42.847187     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.325787 1389119 logs.go:138] Found kubelet problem: Aug 03 23:39:55 old-k8s-version-820414 kubelet[663]: E0803 23:39:55.456344     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.327539 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:09 old-k8s-version-820414 kubelet[663]: E0803 23:40:09.485277     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.328844 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:12 old-k8s-version-820414 kubelet[663]: E0803 23:40:12.189775     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.329196 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:13 old-k8s-version-820414 kubelet[663]: E0803 23:40:13.184996     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.329642 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:15 old-k8s-version-820414 kubelet[663]: E0803 23:40:15.194567     663 pod_workers.go:191] Error syncing pod 760afa3c-130b-47d5-a942-ae27ff7ac5f5 ("storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(760afa3c-130b-47d5-a942-ae27ff7ac5f5)"
	W0803 23:45:19.329978 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:16 old-k8s-version-820414 kubelet[663]: E0803 23:40:16.619029     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.332858 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:23 old-k8s-version-820414 kubelet[663]: E0803 23:40:23.484273     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.333589 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:30 old-k8s-version-820414 kubelet[663]: E0803 23:40:30.262767     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.333779 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.524198     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.334110 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:36 old-k8s-version-820414 kubelet[663]: E0803 23:40:36.615532     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.334443 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:48 old-k8s-version-820414 kubelet[663]: E0803 23:40:48.444598     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.334633 1389119 logs.go:138] Found kubelet problem: Aug 03 23:40:49 old-k8s-version-820414 kubelet[663]: E0803 23:40:49.445255     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.334820 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:03 old-k8s-version-820414 kubelet[663]: E0803 23:41:03.444496     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.335421 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:04 old-k8s-version-820414 kubelet[663]: E0803 23:41:04.368191     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.335758 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:06 old-k8s-version-820414 kubelet[663]: E0803 23:41:06.609272     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.338272 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:15 old-k8s-version-820414 kubelet[663]: E0803 23:41:15.481112     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.338607 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:20 old-k8s-version-820414 kubelet[663]: E0803 23:41:20.444068     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.338795 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:26 old-k8s-version-820414 kubelet[663]: E0803 23:41:26.444477     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.339129 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:35 old-k8s-version-820414 kubelet[663]: E0803 23:41:35.444186     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.339317 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:38 old-k8s-version-820414 kubelet[663]: E0803 23:41:38.444525     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.339919 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:47 old-k8s-version-820414 kubelet[663]: E0803 23:41:47.552369     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.340105 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:52 old-k8s-version-820414 kubelet[663]: E0803 23:41:52.444351     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.340438 1389119 logs.go:138] Found kubelet problem: Aug 03 23:41:56 old-k8s-version-820414 kubelet[663]: E0803 23:41:56.604520     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.340631 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:07 old-k8s-version-820414 kubelet[663]: E0803 23:42:07.444300     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.340975 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:11 old-k8s-version-820414 kubelet[663]: E0803 23:42:11.444597     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.341293 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444598     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.341492 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:22 old-k8s-version-820414 kubelet[663]: E0803 23:42:22.444840     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.341821 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:34 old-k8s-version-820414 kubelet[663]: E0803 23:42:34.444102     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.344283 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:37 old-k8s-version-820414 kubelet[663]: E0803 23:42:37.453580     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0803 23:45:19.344612 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:49 old-k8s-version-820414 kubelet[663]: E0803 23:42:49.444596     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.344805 1389119 logs.go:138] Found kubelet problem: Aug 03 23:42:51 old-k8s-version-820414 kubelet[663]: E0803 23:42:51.445065     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.345144 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:02 old-k8s-version-820414 kubelet[663]: E0803 23:43:02.444071     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.345329 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:05 old-k8s-version-820414 kubelet[663]: E0803 23:43:05.444843     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.345923 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:16 old-k8s-version-820414 kubelet[663]: E0803 23:43:16.774818     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.346108 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:18 old-k8s-version-820414 kubelet[663]: E0803 23:43:18.444453     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.346439 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:26 old-k8s-version-820414 kubelet[663]: E0803 23:43:26.604545     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.346628 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:29 old-k8s-version-820414 kubelet[663]: E0803 23:43:29.445577     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.346959 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:41 old-k8s-version-820414 kubelet[663]: E0803 23:43:41.444779     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.347145 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:44 old-k8s-version-820414 kubelet[663]: E0803 23:43:44.444419     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.347476 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:56 old-k8s-version-820414 kubelet[663]: E0803 23:43:56.444259     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.347665 1389119 logs.go:138] Found kubelet problem: Aug 03 23:43:58 old-k8s-version-820414 kubelet[663]: E0803 23:43:58.444353     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.347998 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: E0803 23:44:08.444136     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.348185 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:12 old-k8s-version-820414 kubelet[663]: E0803 23:44:12.444406     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.348518 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: E0803 23:44:22.444030     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.348705 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:23 old-k8s-version-820414 kubelet[663]: E0803 23:44:23.449169     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.348906 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:35 old-k8s-version-820414 kubelet[663]: E0803 23:44:35.445923     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.349238 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.349570 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.349758 1389119 logs.go:138] Found kubelet problem: Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.349944 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.350279 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:19.350466 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:15 old-k8s-version-820414 kubelet[663]: E0803 23:45:15.444459     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:19.350799 1389119 logs.go:138] Found kubelet problem: Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: E0803 23:45:17.444534     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
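The "Found kubelet problem" entries above come from scanning the last 400 kubelet journal lines for error-level records. A hedged approximation of that scan with plain grep (the regex is an assumption; minikube's real matcher lives in logs.go and may differ):

    # Pull the same journal window and keep only error-level kubelet records.
    sudo journalctl -u kubelet -n 400 | grep -E 'E[0-9]{4} .*(pod_workers|reflector)\.go'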
	I0803 23:45:19.350809 1389119 logs.go:123] Gathering logs for kube-apiserver [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d] ...
	I0803 23:45:19.350823 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d"
	I0803 23:45:19.404505 1389119 logs.go:123] Gathering logs for coredns [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02] ...
	I0803 23:45:19.404535 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02"
	I0803 23:45:19.444242 1389119 logs.go:123] Gathering logs for coredns [1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4] ...
	I0803 23:45:19.444324 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4"
	I0803 23:45:19.497178 1389119 logs.go:123] Gathering logs for kube-proxy [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5] ...
	I0803 23:45:19.497207 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5"
	I0803 23:45:19.535467 1389119 logs.go:123] Gathering logs for kindnet [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659] ...
	I0803 23:45:19.535495 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659"
	I0803 23:45:19.590796 1389119 logs.go:123] Gathering logs for container status ...
	I0803 23:45:19.590831 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0803 23:45:19.664429 1389119 logs.go:123] Gathering logs for dmesg ...
	I0803 23:45:19.664457 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0803 23:45:19.683161 1389119 logs.go:123] Gathering logs for kube-apiserver [fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d] ...
	I0803 23:45:19.683187 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d"
	I0803 23:45:19.762085 1389119 logs.go:123] Gathering logs for kube-proxy [9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc] ...
	I0803 23:45:19.762122 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc"
	I0803 23:45:19.801223 1389119 logs.go:123] Gathering logs for kubernetes-dashboard [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838] ...
	I0803 23:45:19.801250 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838"
	I0803 23:45:19.842438 1389119 logs.go:123] Gathering logs for storage-provisioner [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898] ...
	I0803 23:45:19.842463 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898"
	I0803 23:45:19.881283 1389119 logs.go:123] Gathering logs for describe nodes ...
	I0803 23:45:19.881309 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0803 23:45:20.030350 1389119 logs.go:123] Gathering logs for kube-scheduler [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a] ...
	I0803 23:45:20.030384 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a"
	I0803 23:45:20.078615 1389119 logs.go:123] Gathering logs for kube-scheduler [1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9] ...
	I0803 23:45:20.078644 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9"
	I0803 23:45:20.133111 1389119 logs.go:123] Gathering logs for kindnet [0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34] ...
	I0803 23:45:20.133146 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34"
	I0803 23:45:20.187580 1389119 logs.go:123] Gathering logs for etcd [17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e] ...
	I0803 23:45:20.187613 1389119 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e"
	I0803 23:45:20.231981 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:20.232005 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0803 23:45:20.232080 1389119 out.go:239] X Problems detected in kubelet:
	W0803 23:45:20.232093 1389119 out.go:239]   Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232101 1389119 out.go:239]   Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232128 1389119 out.go:239]   Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	W0803 23:45:20.232153 1389119 out.go:239]   Aug 03 23:45:15 old-k8s-version-820414 kubelet[663]: E0803 23:45:15.444459     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0803 23:45:20.232159 1389119 out.go:239]   Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: E0803 23:45:17.444534     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	I0803 23:45:20.232166 1389119 out.go:304] Setting ErrFile to fd 2...
	I0803 23:45:20.232171 1389119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:45:21.103750 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:23.104465 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:25.603810 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:28.103519 1397775 pod_ready.go:102] pod "metrics-server-6867b74b74-zs9xw" in "kube-system" namespace has status "Ready":"False"
	I0803 23:45:30.232956 1389119 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0803 23:45:30.246240 1389119 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0803 23:45:30.248504 1389119 out.go:177] 
	W0803 23:45:30.250531 1389119 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0803 23:45:30.250566 1389119 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0803 23:45:30.250585 1389119 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0803 23:45:30.250591 1389119 out.go:239] * 
	W0803 23:45:30.251796 1389119 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0803 23:45:30.253672 1389119 out.go:177] 
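
	The run above exits with K8S_UNHEALTHY_CONTROL_PLANE even though the final /healthz probe returned 200: per the message at 23:45:30.250531, what failed is minikube's 6m0s wait for the control plane to report the target version (v1.20.0), not raw apiserver health. A manual re-check of the same probe could look like the sketch below (a hypothetical invocation; it assumes minikube's default certificate layout for this profile and that the cluster at 192.168.76.2:8443 is still running):

	  curl --cacert ~/.minikube/ca.crt \
	       --cert ~/.minikube/profiles/old-k8s-version-820414/client.crt \
	       --key  ~/.minikube/profiles/old-k8s-version-820414/client.key \
	       https://192.168.76.2:8443/healthz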
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e9f478937b2bf       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   0fd262d35dc5b       dashboard-metrics-scraper-8d5bb5db8-lfzsx
	fe656919428c1       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   ab3e060f055a2       storage-provisioner
	32cdc2b56b596       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   27c2dde627e72       kubernetes-dashboard-cd95d586-8cz7j
	e8b4079036ee9       f42786f8afd22       5 minutes ago       Running             kindnet-cni                 1                   8069bf9a2778e       kindnet-gwx4j
	9d49e481b2b62       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   ab3e060f055a2       storage-provisioner
	6b1488f10594b       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   6f7facdd80e90       busybox
	aceae907dc5dd       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   381bd00d85d56       kube-proxy-rgk96
	b8f07cb083173       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   03c876d4d965f       coredns-74ff55c5b-xng8r
	9bf21e77a35b8       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   5421129b9e37e       kube-scheduler-old-k8s-version-820414
	decd46f1eb153       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   a7ba02e0f4906       kube-controller-manager-old-k8s-version-820414
	2dfe55d1797db       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   0b211151e17f0       kube-apiserver-old-k8s-version-820414
	5cbe8058c6723       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   17fb2b2d378db       etcd-old-k8s-version-820414
	9ce162788b5a1       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   36edbaf416419       busybox
	1fb74e635c11f       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   ee0574a1f2ec1       coredns-74ff55c5b-xng8r
	0d22eec037f1c       f42786f8afd22       7 minutes ago       Exited              kindnet-cni                 0                   df034e80af7e1       kindnet-gwx4j
	9d8357e9b98e5       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   7f57945672dd2       kube-proxy-rgk96
	e15be75c9111b       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   dd679d230f9b5       kube-controller-manager-old-k8s-version-820414
	fb5f0343694ff       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   447b07e01ddfe       kube-apiserver-old-k8s-version-820414
	17af5d5b36407       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   90c4b66b4f9f6       etcd-old-k8s-version-820414
	1cf5b72b69cb4       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   ce3ebe3897896       kube-scheduler-old-k8s-version-820414
	
	
	==> containerd <==
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.481527460Z" level=info msg="StartContainer for \"e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456\""
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.592464461Z" level=info msg="StartContainer for \"e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456\" returns successfully"
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.627576730Z" level=info msg="shim disconnected" id=e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456 namespace=k8s.io
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.627641853Z" level=warning msg="cleaning up after shim disconnected" id=e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456 namespace=k8s.io
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.627654333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 03 23:41:46 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:46.642424129Z" level=warning msg="cleanup warnings time=\"2024-08-03T23:41:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Aug 03 23:41:47 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:47.553789293Z" level=info msg="RemoveContainer for \"dca53f66aa1aef008379d8c0adc393f207dd912d52baca794d3554ddcfa6ba8b\""
	Aug 03 23:41:47 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:41:47.564987833Z" level=info msg="RemoveContainer for \"dca53f66aa1aef008379d8c0adc393f207dd912d52baca794d3554ddcfa6ba8b\" returns successfully"
	Aug 03 23:42:37 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:42:37.445003928Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:42:37 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:42:37.450533323Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 03 23:42:37 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:42:37.452198754Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 03 23:42:37 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:42:37.452284202Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.446028951Z" level=info msg="CreateContainer within sandbox \"0fd262d35dc5bf52833efcc3412d7e137518ca9bb5ffa1d68728b46b7f84fdd5\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.461734532Z" level=info msg="CreateContainer within sandbox \"0fd262d35dc5bf52833efcc3412d7e137518ca9bb5ffa1d68728b46b7f84fdd5\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1\""
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.462440277Z" level=info msg="StartContainer for \"e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1\""
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.543565162Z" level=info msg="StartContainer for \"e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1\" returns successfully"
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.569418406Z" level=info msg="shim disconnected" id=e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1 namespace=k8s.io
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.569481520Z" level=warning msg="cleaning up after shim disconnected" id=e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1 namespace=k8s.io
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.569493442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.776159495Z" level=info msg="RemoveContainer for \"e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456\""
	Aug 03 23:43:16 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:43:16.781878766Z" level=info msg="RemoveContainer for \"e72990a3698e7bfdd2bfad1d95387715f047cc682d15c971681a91e8c9d80456\" returns successfully"
	Aug 03 23:45:27 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:45:27.444903693Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:45:27 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:45:27.459574267Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 03 23:45:27 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:45:27.460912651Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 03 23:45:27 old-k8s-version-820414 containerd[571]: time="2024-08-03T23:45:27.460964482Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [1fb74e635c11f9e357abcbe302bff95b9c25381dfe33de15354e0eedbc5564f4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43427 - 9544 "HINFO IN 8098719087605249509.7662860429026608624. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022468575s
	
	
	==> coredns [b8f07cb083173520f0df79356cae5f287397885843c428a87247794fa9b0ad02] <==
	I0803 23:40:13.700212       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-03 23:39:43.696444182 +0000 UTC m=+0.101892922) (total time: 30.003619019s):
	Trace[2019727887]: [30.003619019s] [30.003619019s] END
	E0803 23:40:13.701007       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0803 23:40:13.700892       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-03 23:39:43.700243211 +0000 UTC m=+0.105691951) (total time: 30.00063049s):
	Trace[1427131847]: [30.00063049s] [30.00063049s] END
	E0803 23:40:13.701307       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0803 23:40:13.700976       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-03 23:39:43.700541392 +0000 UTC m=+0.105990124) (total time: 30.000423943s):
	Trace[939984059]: [30.000423943s] [30.000423943s] END
	E0803 23:40:13.701536       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:55731 - 57194 "HINFO IN 4734433355628666549.3525272187591717241. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021564694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-820414
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-820414
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b1de62d5257af3586cb63b8c779e46d9f9bc0082
	                    minikube.k8s.io/name=old-k8s-version-820414
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_03T23_37_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 03 Aug 2024 23:37:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-820414
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 03 Aug 2024 23:45:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 03 Aug 2024 23:40:31 +0000   Sat, 03 Aug 2024 23:37:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 03 Aug 2024 23:40:31 +0000   Sat, 03 Aug 2024 23:37:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 03 Aug 2024 23:40:31 +0000   Sat, 03 Aug 2024 23:37:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 03 Aug 2024 23:40:31 +0000   Sat, 03 Aug 2024 23:37:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-820414
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb8177524c944613a36db777e8a64971
	  System UUID:                bec97393-fdd7-4da8-bf32-e04beecab7af
	  Boot ID:                    7d37f827-388f-4261-892f-42defe929bba
	  Kernel Version:             5.15.0-1066-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-xng8r                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m38s
	  kube-system                 etcd-old-k8s-version-820414                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m45s
	  kube-system                 kindnet-gwx4j                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m38s
	  kube-system                 kube-apiserver-old-k8s-version-820414             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-old-k8s-version-820414    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-rgk96                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-scheduler-old-k8s-version-820414             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 metrics-server-9975d5f86-wm57q                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-lfzsx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-8cz7j               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  8m5s (x5 over 8m5s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s (x4 over 8m5s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s (x4 over 8m5s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m45s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m45s                kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s                kubelet     Node old-k8s-version-820414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s                kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m45s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m38s                kubelet     Node old-k8s-version-820414 status is now: NodeReady
	  Normal  Starting                 7m35s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m2s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s (x7 over 6m2s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m2s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x8 over 6m2s)  kubelet     Node old-k8s-version-820414 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m2s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                kube-proxy  Starting kube-proxy.
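
	For reference, the percentages in the two tables above are computed against the node's Allocatable values and rounded down to a whole percent: 950m of requested CPU against 2 allocatable CPUs is 47%, and 420Mi (430080Ki) of requested memory against 8022360Ki of allocatable memory is 5%.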
	
	
	==> dmesg <==
	[  +0.000944] FS-Cache: O-key=[8] 'd0405c0100000000'
	[  +0.000642] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000872] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=0000000031d671ab
	[  +0.000944] FS-Cache: N-key=[8] 'd0405c0100000000'
	[  +0.002903] FS-Cache: Duplicate cookie detected
	[  +0.000655] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000904] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=00000000fdae399f
	[  +0.000945] FS-Cache: O-key=[8] 'd0405c0100000000'
	[  +0.000669] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000838] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=00000000ba0a0cce
	[  +0.000952] FS-Cache: N-key=[8] 'd0405c0100000000'
	[  +3.720095] FS-Cache: Duplicate cookie detected
	[  +0.000655] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000877] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=000000001e359f0a
	[  +0.000965] FS-Cache: O-key=[8] 'cf405c0100000000'
	[  +0.000672] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000866] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=00000000e7e3c347
	[  +0.000947] FS-Cache: N-key=[8] 'cf405c0100000000'
	[  +0.298964] FS-Cache: Duplicate cookie detected
	[  +0.000648] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.000878] FS-Cache: O-cookie d=0000000061b9e6aa{9p.inode} n=0000000035df286d
	[  +0.001251] FS-Cache: O-key=[8] 'd7405c0100000000'
	[  +0.000909] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=0000000061b9e6aa{9p.inode} n=00000000fed31e81
	[  +0.001182] FS-Cache: N-key=[8] 'd7405c0100000000'
	
	
	==> etcd [17af5d5b36407e3b6155abcf2050b778e7fba88e3816192cc23567d11a94cc3e] <==
	2024-08-03 23:37:27.481122 I | embed: listening for peers on 192.168.76.2:2380
	2024-08-03 23:37:27.481238 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/08/03 23:37:27 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2024-08-03 23:37:27.481717 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/08/03 23:37:27 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/08/03 23:37:27 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/08/03 23:37:27 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/08/03 23:37:27 INFO: ea7e25599daad906 became leader at term 2
	raft2024/08/03 23:37:27 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-08-03 23:37:27.668628 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-03 23:37:27.670155 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-03 23:37:27.670494 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-03 23:37:27.670597 I | etcdserver: published {Name:old-k8s-version-820414 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-08-03 23:37:27.670695 I | embed: ready to serve client requests
	2024-08-03 23:37:27.671048 I | embed: ready to serve client requests
	2024-08-03 23:37:27.672485 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-03 23:37:27.716857 I | embed: serving client requests on 192.168.76.2:2379
	2024-08-03 23:37:54.315969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:37:55.365634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:05.365664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:15.365609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:25.365598 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:35.365519 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:45.365671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:38:55.365702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5cbe8058c6723a2d2086539e5d8807f73a64893442fe8bb140d579ee4aececc6] <==
	2024-08-03 23:41:29.327909 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:41:39.327905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:41:49.327728 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:41:59.328025 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:09.327804 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:19.327769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:29.327736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:39.328077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:49.327945 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:42:59.327816 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:09.327861 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:19.327732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:29.327803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:39.327821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:49.327761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:43:59.327667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:09.327780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:19.327782 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:29.327827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:39.327947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:49.327726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:44:59.327753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:45:09.327978 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:45:19.327896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-03 23:45:29.327830 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:45:31 up  8:27,  0 users,  load average: 0.55, 1.86, 2.40
	Linux old-k8s-version-820414 5.15.0-1066-aws #72~20.04.1-Ubuntu SMP Sat Jul 20 07:44:07 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0d22eec037f1c7aab116e4aa78c05368658c4097a76b252c826555e1a25f0d34] <==
	E0803 23:38:06.941161       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0803 23:38:07.545981       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:07.546042       1 main.go:299] handling current node
	W0803 23:38:14.219654       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:38:14.219693       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:38:17.329677       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:38:17.329727       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0803 23:38:17.546240       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:17.546278       1 main.go:299] handling current node
	W0803 23:38:17.744356       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 23:38:17.744392       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0803 23:38:27.546105       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:27.546142       1 main.go:299] handling current node
	W0803 23:38:33.519981       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:38:33.520022       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0803 23:38:34.746722       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:38:34.746759       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0803 23:38:37.545666       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:37.545708       1 main.go:299] handling current node
	W0803 23:38:38.095971       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 23:38:38.096203       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0803 23:38:47.545624       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:47.545664       1 main.go:299] handling current node
	I0803 23:38:57.546239       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:38:57.546291       1 main.go:299] handling current node
	
	
	==> kindnet [e8b4079036ee954a62ef0fc54baf2b399cac2f40b9f5d82804b0a0b63287f659] <==
	I0803 23:44:15.846220       1 main.go:299] handling current node
	I0803 23:44:25.845234       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:44:25.845460       1 main.go:299] handling current node
	W0803 23:44:27.404812       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 23:44:27.404857       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0803 23:44:35.845472       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:44:35.845509       1 main.go:299] handling current node
	I0803 23:44:45.845534       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:44:45.845570       1 main.go:299] handling current node
	W0803 23:44:46.319262       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0803 23:44:46.319300       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0803 23:44:51.539721       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:44:51.539817       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0803 23:44:55.846013       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:44:55.846059       1 main.go:299] handling current node
	I0803 23:45:05.846227       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:45:05.846265       1 main.go:299] handling current node
	I0803 23:45:15.845233       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:45:15.845267       1 main.go:299] handling current node
	W0803 23:45:24.656615       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0803 23:45:24.656664       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0803 23:45:25.845435       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0803 23:45:25.845474       1 main.go:299] handling current node
	W0803 23:45:31.493549       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:45:31.493591       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	
	
	==> kube-apiserver [2dfe55d1797db163cd9199214e216cbd925e66788ac3213f4189fd6d49eb137d] <==
	I0803 23:41:50.545217       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:41:50.545301       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0803 23:42:34.915732       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:42:34.915962       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:42:34.915975       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0803 23:42:44.485292       1 handler_proxy.go:102] no RequestInfo found in the context
	E0803 23:42:44.485525       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0803 23:42:44.485545       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0803 23:43:14.156486       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:43:14.156693       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:43:14.156712       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0803 23:43:45.474431       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:43:45.474475       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:43:45.474512       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0803 23:44:24.729158       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:44:24.729206       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:44:24.729214       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0803 23:44:41.619054       1 handler_proxy.go:102] no RequestInfo found in the context
	E0803 23:44:41.619129       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0803 23:44:41.619137       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0803 23:45:08.235681       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:45:08.235720       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:45:08.235728       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [fb5f0343694ff099d26f408ccb02c199901b58111f3e21cf3de3c23a210d7b6d] <==
	I0803 23:37:34.770000       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0803 23:37:34.853296       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0803 23:37:34.860581       1 controller.go:606] quota admission added evaluator for: namespaces
	I0803 23:37:35.529649       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0803 23:37:35.529685       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0803 23:37:35.534801       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0803 23:37:35.538555       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0803 23:37:35.538577       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0803 23:37:36.057950       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0803 23:37:36.114755       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0803 23:37:36.234981       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0803 23:37:36.236259       1 controller.go:606] quota admission added evaluator for: endpoints
	I0803 23:37:36.240525       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0803 23:37:37.161011       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0803 23:37:37.778645       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0803 23:37:37.867538       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0803 23:37:46.287194       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0803 23:37:53.770845       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0803 23:37:53.948596       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0803 23:38:01.779067       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:38:01.780362       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:38:01.780404       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0803 23:38:32.232105       1 client.go:360] parsed scheme: "passthrough"
	I0803 23:38:32.232150       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0803 23:38:32.232159       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [decd46f1eb153a03a912739b52c5220cab3bc0a68a41ce2f501a19b1547d030a] <==
	W0803 23:41:04.655760       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:41:30.722148       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:41:36.306143       1 request.go:655] Throttling request took 1.048265773s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0803 23:41:37.157664       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:42:01.224162       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:42:08.808308       1 request.go:655] Throttling request took 1.048373043s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0803 23:42:09.659713       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:42:31.726014       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:42:41.310161       1 request.go:655] Throttling request took 1.048208091s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0803 23:42:42.162129       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:43:02.227767       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:43:13.814395       1 request.go:655] Throttling request took 1.048309236s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0803 23:43:14.665794       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:43:32.729579       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:43:46.316357       1 request.go:655] Throttling request took 1.048289403s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0803 23:43:47.167741       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:44:03.231469       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:44:18.818323       1 request.go:655] Throttling request took 1.048394895s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0803 23:44:19.669882       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:44:33.733300       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:44:51.320280       1 request.go:655] Throttling request took 1.048358691s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0803 23:44:52.171774       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0803 23:45:04.235202       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0803 23:45:23.822163       1 request.go:655] Throttling request took 1.048438866s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0803 23:45:24.673416       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [e15be75c9111b317b4e56baaf7f93bc458c9080032ce09cbdd0e3fa77a17cb5e] <==
	I0803 23:37:53.881487       1 range_allocator.go:373] Set node old-k8s-version-820414 PodCIDR to [10.244.0.0/24]
	I0803 23:37:53.891398       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0803 23:37:53.892840       1 shared_informer.go:247] Caches are synced for resource quota 
	I0803 23:37:53.894944       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0803 23:37:53.900825       1 shared_informer.go:247] Caches are synced for disruption 
	I0803 23:37:53.900848       1 disruption.go:339] Sending events to api server.
	E0803 23:37:53.905188       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:37:53.905442       1 shared_informer.go:247] Caches are synced for resource quota 
	I0803 23:37:53.906992       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gwx4j"
	I0803 23:37:53.937540       1 shared_informer.go:247] Caches are synced for deployment 
	E0803 23:37:53.939292       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:37:53.941724       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rgk96"
	I0803 23:37:53.989118       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0803 23:37:54.017496       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-xng8r"
	I0803 23:37:54.055721       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-mq76w"
	E0803 23:37:54.065071       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"687efdd9-c112-4c99-a123-b957e89b6fcc", ResourceVersion:"267", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63858325057, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001672f60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001672f80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001672fa0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001698680), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001672
fc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001672fe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001673020)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001646960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000bc2a98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000affc70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002072d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000bc2ae8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0803 23:37:54.461149       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0803 23:37:54.538794       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0803 23:37:54.538824       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0803 23:37:54.561399       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0803 23:37:55.501048       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0803 23:37:55.518167       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-mq76w"
	I0803 23:37:58.652528       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0803 23:39:01.403762       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0803 23:39:01.509068       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
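
The "object has been modified" conflicts above (clusterroles admin/edit, daemonset kube-proxy) are ordinary optimistic-concurrency retries during controller-manager startup; the subsequent events show the objects reconciling normally. A quick way to confirm the daemonset settled, assuming the old-k8s-version-820414 context is still available:

	kubectl --context old-k8s-version-820414 -n kube-system rollout status daemonset/kube-proxy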
	
	
	==> kube-proxy [9d8357e9b98e5a01f551902057365a6f43c698ef3ffda88640f432be5ca342dc] <==
	I0803 23:37:56.221465       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0803 23:37:56.222342       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0803 23:37:56.258218       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0803 23:37:56.258535       1 server_others.go:185] Using iptables Proxier.
	I0803 23:37:56.258933       1 server.go:650] Version: v1.20.0
	I0803 23:37:56.259801       1 config.go:315] Starting service config controller
	I0803 23:37:56.259962       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0803 23:37:56.259994       1 config.go:224] Starting endpoint slice config controller
	I0803 23:37:56.259998       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0803 23:37:56.360088       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0803 23:37:56.360151       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [aceae907dc5dd357ef34d25fba3386bee6214298dda34f31bb744dd99aaf02c5] <==
	I0803 23:39:44.318647       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0803 23:39:44.319004       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0803 23:39:44.417459       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0803 23:39:44.417835       1 server_others.go:185] Using iptables Proxier.
	I0803 23:39:44.418477       1 server.go:650] Version: v1.20.0
	I0803 23:39:44.419274       1 config.go:315] Starting service config controller
	I0803 23:39:44.419435       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0803 23:39:44.419535       1 config.go:224] Starting endpoint slice config controller
	I0803 23:39:44.419617       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0803 23:39:44.520087       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0803 23:39:44.520315       1 shared_informer.go:247] Caches are synced for service config 
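
Both kube-proxy instances log `Unknown proxy mode "", assuming iptables proxy`: kubeadm leaves `mode` empty in the KubeProxyConfiguration, and kube-proxy falls back to the iptables proxier, which both restarts then use. A sketch to confirm what was written to the config:

	kubectl --context old-k8s-version-820414 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'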
	
	
	==> kube-scheduler [1cf5b72b69cb46a2f0ebd44d87f295610e0a42f9bfb29e2fb3e863c1645de1b9] <==
	I0803 23:37:29.067175       1 serving.go:331] Generated self-signed cert in-memory
	W0803 23:37:34.662180       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0803 23:37:34.662212       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:37:34.662221       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0803 23:37:34.662227       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0803 23:37:34.748097       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0803 23:37:34.750847       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:37:34.750872       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:37:34.750889       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0803 23:37:34.797005       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:34.797340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0803 23:37:34.797712       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0803 23:37:34.798485       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:37:34.800735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0803 23:37:34.801016       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0803 23:37:34.801226       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0803 23:37:34.801458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0803 23:37:34.801813       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0803 23:37:34.802789       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0803 23:37:34.802989       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0803 23:37:34.803086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0803 23:37:35.746888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0803 23:37:35.932978       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0803 23:37:38.350967       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
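
The burst of `forbidden` list errors ends once kubeadm's RBAC bootstrap completes (the client-ca cache syncs a few seconds later, last line above); the extension-apiserver-authentication warning recurs on the second start too and is benign unless request-header client authentication is required. If it did need fixing, the log's own hint applies; a minimal sketch with a hypothetical binding name:

	kubectl -n kube-system create rolebinding scheduler-authn-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler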
	
	
	==> kube-scheduler [9bf21e77a35b82c0eff0d1588e963d3c44c43c2c26455b9d13e89d93c07f4a7a] <==
	I0803 23:39:32.931823       1 serving.go:331] Generated self-signed cert in-memory
	W0803 23:39:40.431552       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0803 23:39:40.431587       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0803 23:39:40.431595       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0803 23:39:40.431602       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0803 23:39:40.650656       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0803 23:39:40.650741       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:39:40.650747       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0803 23:39:40.650770       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0803 23:39:40.956938       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 03 23:43:58 old-k8s-version-820414 kubelet[663]: E0803 23:43:58.444353     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: I0803 23:44:08.443737     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:44:08 old-k8s-version-820414 kubelet[663]: E0803 23:44:08.444136     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:44:12 old-k8s-version-820414 kubelet[663]: E0803 23:44:12.444406     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: I0803 23:44:22.443674     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:44:22 old-k8s-version-820414 kubelet[663]: E0803 23:44:22.444030     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:44:23 old-k8s-version-820414 kubelet[663]: E0803 23:44:23.449169     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:44:35 old-k8s-version-820414 kubelet[663]: E0803 23:44:35.445923     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: I0803 23:44:37.443821     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:44:37 old-k8s-version-820414 kubelet[663]: E0803 23:44:37.444223     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: I0803 23:44:50.443836     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.444155     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:44:50 old-k8s-version-820414 kubelet[663]: E0803 23:44:50.445023     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:45:02 old-k8s-version-820414 kubelet[663]: E0803 23:45:02.444334     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: I0803 23:45:04.443724     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:45:04 old-k8s-version-820414 kubelet[663]: E0803 23:45:04.444530     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:45:15 old-k8s-version-820414 kubelet[663]: E0803 23:45:15.444459     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: I0803 23:45:17.443748     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:45:17 old-k8s-version-820414 kubelet[663]: E0803 23:45:17.444534     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
	Aug 03 23:45:27 old-k8s-version-820414 kubelet[663]: E0803 23:45:27.461148     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 03 23:45:27 old-k8s-version-820414 kubelet[663]: E0803 23:45:27.461239     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 03 23:45:27 old-k8s-version-820414 kubelet[663]: E0803 23:45:27.461739     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-nxdwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-wm57q_kube-system(fb610ad
c-a764-488d-83dd-61cc38a45f5f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 03 23:45:27 old-k8s-version-820414 kubelet[663]: E0803 23:45:27.461785     663 pod_workers.go:191] Error syncing pod fb610adc-a764-488d-83dd-61cc38a45f5f ("metrics-server-9975d5f86-wm57q_kube-system(fb610adc-a764-488d-83dd-61cc38a45f5f)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 03 23:45:30 old-k8s-version-820414 kubelet[663]: I0803 23:45:30.443733     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: e9f478937b2bff489b7ea0b9e7c88f8afe428985a501ab678646efdde70637b1
	Aug 03 23:45:30 old-k8s-version-820414 kubelet[663]: E0803 23:45:30.444218     663 pod_workers.go:191] Error syncing pod ffd039c1-683a-48d6-aaaa-75fd4714221a ("dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lfzsx_kubernetes-dashboard(ffd039c1-683a-48d6-aaaa-75fd4714221a)"
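
The recurring ImagePullBackOff is expected here: the metrics-server pod is configured to pull from fake.domain, which (as the DNS error shows) can never resolve, so the container can never start; this appears deliberate on the test's part. The dashboard-metrics-scraper CrashLoopBackOff is an independent failure. To see the image the deployment actually requests, a sketch:

	kubectl --context old-k8s-version-820414 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'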
	
	
	==> kubernetes-dashboard [32cdc2b56b59662ee956a5208730f2f361969d0adbd1439929e7e6c099a03838] <==
	2024/08/03 23:40:05 Using namespace: kubernetes-dashboard
	2024/08/03 23:40:05 Using in-cluster config to connect to apiserver
	2024/08/03 23:40:05 Using secret token for csrf signing
	2024/08/03 23:40:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/03 23:40:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/03 23:40:05 Successful initial request to the apiserver, version: v1.20.0
	2024/08/03 23:40:05 Generating JWE encryption key
	2024/08/03 23:40:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/03 23:40:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/03 23:40:06 Initializing JWE encryption key from synchronized object
	2024/08/03 23:40:06 Creating in-cluster Sidecar client
	2024/08/03 23:40:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:40:06 Serving insecurely on HTTP port: 9090
	2024/08/03 23:40:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:41:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:41:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:42:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:42:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:43:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:43:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:44:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:44:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:45:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/03 23:40:05 Starting overwatch
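
Every metric client health check fails on `get services dashboard-metrics-scraper` because the scraper pod is in CrashLoopBackOff (see the kubelet log above), leaving its service without ready endpoints; the dashboard retries every 30 seconds as designed. A sketch to confirm the empty endpoints:

	kubectl --context old-k8s-version-820414 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper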
	
	
	==> storage-provisioner [9d49e481b2b6243d5e9de51f4fcc1d6d8cb677348b5fd9254200807f07c7297e] <==
	I0803 23:39:44.104715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0803 23:40:14.107752       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fe656919428c126fa2c0d30d5ab782825c967bc723dfa769843ecf5bba788898] <==
	I0803 23:40:26.665766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0803 23:40:26.714249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0803 23:40:26.714642       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0803 23:40:44.207107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0803 23:40:44.207339       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-820414_3839ae29-6bce-49ce-8c93-a7cb3047f730!
	I0803 23:40:44.208589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1e39756-28e4-4d4b-8c03-26743d31ea97", APIVersion:"v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-820414_3839ae29-6bce-49ce-8c93-a7cb3047f730 became leader
	I0803 23:40:44.308151       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-820414_3839ae29-6bce-49ce-8c93-a7cb3047f730!
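
The first storage-provisioner instance dies when the in-cluster apiserver VIP (10.96.0.1:443) times out during the restart; its replacement starts at 23:40:26 and acquires the kube-system/k8s.io-minikube-hostpath leader lease about 18 seconds later, after which the controller runs normally. This style of client-go leader election records the holder as an annotation on that Endpoints object; a sketch:

	kubectl --context old-k8s-version-820414 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations}'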
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-820414 -n old-k8s-version-820414
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-820414 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-wm57q
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-820414 describe pod metrics-server-9975d5f86-wm57q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-820414 describe pod metrics-server-9975d5f86-wm57q: exit status 1 (246.099647ms)

** stderr ** 
	E0803 23:45:33.585545 1402396 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0803 23:45:33.606309 1402396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0803 23:45:33.610960 1402396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0803 23:45:33.613223 1402396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0803 23:45:33.626991 1402396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0803 23:45:33.629984 1402396 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-wm57q" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-820414 describe pod metrics-server-9975d5f86-wm57q: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (379.53s)
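
Two notes on the post-mortem stderr: the repeated memcache errors are kubectl discovery failing to list metrics.k8s.io/v1beta1 (metrics-server never became ready, so its aggregated APIService cannot serve), and the closing NotFound suggests the metrics-server pod was deleted between the pod listing and the describe. A sketch to inspect the aggregated API's condition:

	kubectl --context old-k8s-version-820414 get apiservice v1beta1.metrics.k8s.io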

Test pass (303/336)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.48
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 6.88
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.19
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.31.0-rc.0/json-events 10.33
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.2
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 221.93
40 TestAddons/serial/GCPAuth/Namespaces 0.18
42 TestAddons/parallel/Registry 16.35
43 TestAddons/parallel/Ingress 20.39
44 TestAddons/parallel/InspektorGadget 11.83
45 TestAddons/parallel/MetricsServer 5.97
48 TestAddons/parallel/CSI 54.98
49 TestAddons/parallel/Headlamp 16.82
50 TestAddons/parallel/CloudSpanner 6.59
51 TestAddons/parallel/LocalPath 52.53
52 TestAddons/parallel/NvidiaDevicePlugin 5.53
53 TestAddons/parallel/Yakd 11.87
54 TestAddons/StoppedEnableDisable 12.31
55 TestCertOptions 40.93
56 TestCertExpiration 228.96
58 TestForceSystemdFlag 42.6
59 TestForceSystemdEnv 47.48
60 TestDockerEnvContainerd 45.48
65 TestErrorSpam/setup 28.6
66 TestErrorSpam/start 0.72
67 TestErrorSpam/status 0.98
68 TestErrorSpam/pause 1.64
69 TestErrorSpam/unpause 1.77
70 TestErrorSpam/stop 1.41
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 69.68
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 6.31
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.11
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.47
82 TestFunctional/serial/CacheCmd/cache/add_local 1.51
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.06
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
87 TestFunctional/serial/CacheCmd/cache/delete 0.12
88 TestFunctional/serial/MinikubeKubectlCmd 0.14
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 46.57
91 TestFunctional/serial/ComponentHealth 0.09
92 TestFunctional/serial/LogsCmd 1.71
93 TestFunctional/serial/LogsFileCmd 1.77
94 TestFunctional/serial/InvalidService 4.46
96 TestFunctional/parallel/ConfigCmd 0.43
97 TestFunctional/parallel/DashboardCmd 7.31
98 TestFunctional/parallel/DryRun 0.6
99 TestFunctional/parallel/InternationalLanguage 0.25
100 TestFunctional/parallel/StatusCmd 1.31
104 TestFunctional/parallel/ServiceCmdConnect 9.72
105 TestFunctional/parallel/AddonsCmd 0.2
106 TestFunctional/parallel/PersistentVolumeClaim 25.23
108 TestFunctional/parallel/SSHCmd 0.65
109 TestFunctional/parallel/CpCmd 2.2
111 TestFunctional/parallel/FileSync 0.33
112 TestFunctional/parallel/CertSync 2.25
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
120 TestFunctional/parallel/License 0.25
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
133 TestFunctional/parallel/ServiceCmd/List 0.49
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
136 TestFunctional/parallel/ServiceCmd/HTTPS 0.48
137 TestFunctional/parallel/ProfileCmd/profile_list 0.62
138 TestFunctional/parallel/ServiceCmd/Format 0.63
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.6
140 TestFunctional/parallel/ServiceCmd/URL 0.56
141 TestFunctional/parallel/MountCmd/any-port 8.41
142 TestFunctional/parallel/MountCmd/specific-port 2
143 TestFunctional/parallel/Version/short 0.07
144 TestFunctional/parallel/Version/components 1.32
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.19
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
150 TestFunctional/parallel/ImageCommands/ImageBuild 2.88
151 TestFunctional/parallel/ImageCommands/Setup 0.8
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.73
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.68
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.88
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.75
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 122.79
169 TestMultiControlPlane/serial/DeployApp 31.32
170 TestMultiControlPlane/serial/PingHostFromPods 1.58
171 TestMultiControlPlane/serial/AddWorkerNode 22.67
172 TestMultiControlPlane/serial/NodeLabels 0.12
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
174 TestMultiControlPlane/serial/CopyFile 19.01
175 TestMultiControlPlane/serial/StopSecondaryNode 12.91
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
177 TestMultiControlPlane/serial/RestartSecondaryNode 18.59
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 138.29
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.44
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
182 TestMultiControlPlane/serial/StopCluster 36.04
183 TestMultiControlPlane/serial/RestartCluster 73.58
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
185 TestMultiControlPlane/serial/AddSecondaryNode 44.1
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
190 TestJSONOutput/start/Command 69.83
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.73
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.64
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.78
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.22
215 TestKicCustomNetwork/create_custom_network 38.47
216 TestKicCustomNetwork/use_default_bridge_network 33.81
217 TestKicExistingNetwork 32.1
218 TestKicCustomSubnet 37.53
219 TestKicStaticIP 33.44
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 67.57
224 TestMountStart/serial/StartWithMountFirst 6.43
225 TestMountStart/serial/VerifyMountFirst 0.25
226 TestMountStart/serial/StartWithMountSecond 6.19
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.57
229 TestMountStart/serial/VerifyMountPostDelete 0.26
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 7.41
232 TestMountStart/serial/VerifyMountPostStop 0.25
235 TestMultiNode/serial/FreshStart2Nodes 78.65
236 TestMultiNode/serial/DeployApp2Nodes 18.01
237 TestMultiNode/serial/PingHostFrom2Pods 0.96
238 TestMultiNode/serial/AddNode 15.94
239 TestMultiNode/serial/MultiNodeLabels 0.1
240 TestMultiNode/serial/ProfileList 0.32
241 TestMultiNode/serial/CopyFile 10.34
242 TestMultiNode/serial/StopNode 2.26
243 TestMultiNode/serial/StartAfterStop 9.39
244 TestMultiNode/serial/RestartKeepsNodes 89.84
245 TestMultiNode/serial/DeleteNode 5.38
246 TestMultiNode/serial/StopMultiNode 24
247 TestMultiNode/serial/RestartMultiNode 56.89
248 TestMultiNode/serial/ValidateNameConflict 32.19
253 TestPreload 114.88
255 TestScheduledStopUnix 108.26
258 TestInsufficientStorage 10.7
259 TestRunningBinaryUpgrade 98.9
261 TestKubernetesUpgrade 361.83
262 TestMissingContainerUpgrade 151.23
264 TestPause/serial/Start 68.47
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 43.93
268 TestNoKubernetes/serial/StartWithStopK8s 16.59
269 TestNoKubernetes/serial/Start 5.58
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
271 TestNoKubernetes/serial/ProfileList 1.01
272 TestNoKubernetes/serial/Stop 1.25
273 TestPause/serial/SecondStartNoReconfiguration 6.68
274 TestNoKubernetes/serial/StartNoArgs 6.86
275 TestPause/serial/Pause 0.91
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
277 TestPause/serial/VerifyStatus 0.38
278 TestPause/serial/Unpause 0.68
279 TestPause/serial/PauseAgain 0.9
280 TestPause/serial/DeletePaused 3.02
281 TestPause/serial/VerifyDeletedResources 0.16
282 TestStoppedBinaryUpgrade/Setup 0.71
283 TestStoppedBinaryUpgrade/Upgrade 110.34
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
299 TestNetworkPlugins/group/false 5.5
304 TestStartStop/group/old-k8s-version/serial/FirstStart 117.49
305 TestStartStop/group/old-k8s-version/serial/DeployApp 8.62
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
307 TestStartStop/group/old-k8s-version/serial/Stop 12.11
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/no-preload/serial/FirstStart 84.77
312 TestStartStop/group/no-preload/serial/DeployApp 9.36
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
314 TestStartStop/group/no-preload/serial/Stop 12.08
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 268.18
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.21
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
320 TestStartStop/group/old-k8s-version/serial/Pause 3.14
322 TestStartStop/group/embed-certs/serial/FirstStart 63.09
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.03
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.18
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1
326 TestStartStop/group/no-preload/serial/Pause 3.65
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.42
329 TestStartStop/group/embed-certs/serial/DeployApp 8.42
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
331 TestStartStop/group/embed-certs/serial/Stop 12.33
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
333 TestStartStop/group/embed-certs/serial/SecondStart 280.71
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.4
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.29
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.29
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
342 TestStartStop/group/embed-certs/serial/Pause 3.41
344 TestStartStop/group/newest-cni/serial/FirstStart 44.85
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.73
349 TestNetworkPlugins/group/auto/Start 72.47
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.49
352 TestStartStop/group/newest-cni/serial/Stop 1.34
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
354 TestStartStop/group/newest-cni/serial/SecondStart 24.59
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.79
358 TestStartStop/group/newest-cni/serial/Pause 3.04
359 TestNetworkPlugins/group/kindnet/Start 76.52
360 TestNetworkPlugins/group/auto/KubeletFlags 0.41
361 TestNetworkPlugins/group/auto/NetCatPod 11.48
362 TestNetworkPlugins/group/auto/DNS 0.23
363 TestNetworkPlugins/group/auto/Localhost 0.17
364 TestNetworkPlugins/group/auto/HairPin 0.14
365 TestNetworkPlugins/group/calico/Start 66.32
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
368 TestNetworkPlugins/group/kindnet/NetCatPod 9.38
369 TestNetworkPlugins/group/kindnet/DNS 0.22
370 TestNetworkPlugins/group/kindnet/Localhost 0.17
371 TestNetworkPlugins/group/kindnet/HairPin 0.19
372 TestNetworkPlugins/group/custom-flannel/Start 66.16
373 TestNetworkPlugins/group/calico/ControllerPod 6.07
374 TestNetworkPlugins/group/calico/KubeletFlags 0.38
375 TestNetworkPlugins/group/calico/NetCatPod 10.48
376 TestNetworkPlugins/group/calico/DNS 0.26
377 TestNetworkPlugins/group/calico/Localhost 0.19
378 TestNetworkPlugins/group/calico/HairPin 0.32
379 TestNetworkPlugins/group/enable-default-cni/Start 86.61
380 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
381 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.36
382 TestNetworkPlugins/group/custom-flannel/DNS 0.2
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
385 TestNetworkPlugins/group/flannel/Start 63.14
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
391 TestNetworkPlugins/group/flannel/ControllerPod 6.01
392 TestNetworkPlugins/group/bridge/Start 53.62
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.38
394 TestNetworkPlugins/group/flannel/NetCatPod 10.33
395 TestNetworkPlugins/group/flannel/DNS 0.24
396 TestNetworkPlugins/group/flannel/Localhost 0.22
397 TestNetworkPlugins/group/flannel/HairPin 0.2
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
399 TestNetworkPlugins/group/bridge/NetCatPod 8.29
400 TestNetworkPlugins/group/bridge/DNS 0.18
401 TestNetworkPlugins/group/bridge/Localhost 0.15
402 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (9.48s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-024661 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-024661 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.481816097s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.48s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-024661
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-024661: exit status 85 (87.483938ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-024661 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |          |
	|         | -p download-only-024661        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:48:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:48:36.759676 1185708 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:48:36.759915 1185708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:36.759943 1185708 out.go:304] Setting ErrFile to fd 2...
	I0803 22:48:36.759960 1185708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:36.760216 1185708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	W0803 22:48:36.760380 1185708 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19364-1180294/.minikube/config/config.json: open /home/jenkins/minikube-integration/19364-1180294/.minikube/config/config.json: no such file or directory
	I0803 22:48:36.760863 1185708 out.go:298] Setting JSON to true
	I0803 22:48:36.761756 1185708 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27062,"bootTime":1722698255,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 22:48:36.761850 1185708 start.go:139] virtualization:  
	I0803 22:48:36.765596 1185708 out.go:97] [download-only-024661] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0803 22:48:36.765750 1185708 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball: no such file or directory
	I0803 22:48:36.765795 1185708 notify.go:220] Checking for updates...
	I0803 22:48:36.767889 1185708 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:48:36.769678 1185708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:48:36.771934 1185708 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 22:48:36.774233 1185708 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 22:48:36.776308 1185708 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0803 22:48:36.780255 1185708 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:48:36.780534 1185708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:48:36.800639 1185708 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 22:48:36.800754 1185708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:36.860639 1185708 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-03 22:48:36.851217354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:36.860770 1185708 docker.go:307] overlay module found
	I0803 22:48:36.863298 1185708 out.go:97] Using the docker driver based on user configuration
	I0803 22:48:36.863328 1185708 start.go:297] selected driver: docker
	I0803 22:48:36.863335 1185708 start.go:901] validating driver "docker" against <nil>
	I0803 22:48:36.863459 1185708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:36.917593 1185708 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-03 22:48:36.908049278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:36.917758 1185708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:48:36.918044 1185708 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0803 22:48:36.918200 1185708 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:48:36.920933 1185708 out.go:169] Using Docker driver with root privileges
	I0803 22:48:36.923016 1185708 cni.go:84] Creating CNI manager for ""
	I0803 22:48:36.923035 1185708 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:48:36.923048 1185708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 22:48:36.923162 1185708 start.go:340] cluster config:
	{Name:download-only-024661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-024661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:48:36.925489 1185708 out.go:97] Starting "download-only-024661" primary control-plane node in "download-only-024661" cluster
	I0803 22:48:36.925508 1185708 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 22:48:36.927384 1185708 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0803 22:48:36.927406 1185708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0803 22:48:36.927551 1185708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 22:48:36.941776 1185708 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 22:48:36.941954 1185708 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 22:48:36.942053 1185708 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 22:48:36.992384 1185708 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0803 22:48:36.992409 1185708 cache.go:56] Caching tarball of preloaded images
	I0803 22:48:36.993036 1185708 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0803 22:48:36.995437 1185708 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0803 22:48:36.995464 1185708 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0803 22:48:37.078039 1185708 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-024661 host does not exist
	  To start a cluster, run: "minikube start -p download-only-024661"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-024661
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (6.88s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-430970 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-430970 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.877810627s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.88s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-430970
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-430970: exit status 85 (74.167693ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-024661 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-024661        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-024661        | download-only-024661 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only        | download-only-430970 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-430970        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:48:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:48:46.672965 1185912 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:48:46.673165 1185912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:46.673191 1185912 out.go:304] Setting ErrFile to fd 2...
	I0803 22:48:46.673208 1185912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:46.673496 1185912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 22:48:46.673943 1185912 out.go:298] Setting JSON to true
	I0803 22:48:46.674815 1185912 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27072,"bootTime":1722698255,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 22:48:46.674906 1185912 start.go:139] virtualization:  
	I0803 22:48:46.677868 1185912 out.go:97] [download-only-430970] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 22:48:46.678072 1185912 notify.go:220] Checking for updates...
	I0803 22:48:46.680202 1185912 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:48:46.682440 1185912 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:48:46.684443 1185912 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 22:48:46.686616 1185912 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 22:48:46.688636 1185912 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0803 22:48:46.692664 1185912 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:48:46.692945 1185912 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:48:46.715146 1185912 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 22:48:46.715254 1185912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:46.769756 1185912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:48:46.760661678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:46.769876 1185912 docker.go:307] overlay module found
	I0803 22:48:46.772127 1185912 out.go:97] Using the docker driver based on user configuration
	I0803 22:48:46.772155 1185912 start.go:297] selected driver: docker
	I0803 22:48:46.772162 1185912 start.go:901] validating driver "docker" against <nil>
	I0803 22:48:46.772266 1185912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:46.821668 1185912 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:48:46.812951868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:46.821838 1185912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:48:46.822111 1185912 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0803 22:48:46.822306 1185912 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:48:46.824584 1185912 out.go:169] Using Docker driver with root privileges
	I0803 22:48:46.826630 1185912 cni.go:84] Creating CNI manager for ""
	I0803 22:48:46.826646 1185912 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:48:46.826655 1185912 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 22:48:46.826756 1185912 start.go:340] cluster config:
	{Name:download-only-430970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-430970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 22:48:46.828792 1185912 out.go:97] Starting "download-only-430970" primary control-plane node in "download-only-430970" cluster
	I0803 22:48:46.828811 1185912 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 22:48:46.830978 1185912 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0803 22:48:46.831007 1185912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0803 22:48:46.831172 1185912 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 22:48:46.866992 1185912 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 22:48:46.867102 1185912 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 22:48:46.867126 1185912 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0803 22:48:46.867135 1185912 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0803 22:48:46.867143 1185912 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0803 22:48:46.889432 1185912 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0803 22:48:46.889464 1185912 cache.go:56] Caching tarball of preloaded images
	I0803 22:48:46.890137 1185912 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0803 22:48:46.896807 1185912 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0803 22:48:46.896857 1185912 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 ...
	I0803 22:48:46.977402 1185912 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:2969442dcdf6412905c6484ccc8dd1ed -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-430970 host does not exist
	  To start a cluster, run: "minikube start -p download-only-430970"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.19s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-430970
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0-rc.0/json-events (10.33s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-921243 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-921243 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.333662136s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (10.33s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-921243
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-921243: exit status 85 (79.047692ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-024661 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-024661           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-024661           | download-only-024661 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-430970 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-430970           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| delete  | -p download-only-430970           | download-only-430970 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC | 03 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-921243 | jenkins | v1.33.1 | 03 Aug 24 22:48 UTC |                     |
	|         | -p download-only-921243           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/03 22:48:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0803 22:48:53.959855 1186118 out.go:291] Setting OutFile to fd 1 ...
	I0803 22:48:53.961374 1186118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:53.961386 1186118 out.go:304] Setting ErrFile to fd 2...
	I0803 22:48:53.961392 1186118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 22:48:53.961734 1186118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 22:48:53.962407 1186118 out.go:298] Setting JSON to true
	I0803 22:48:53.963219 1186118 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27079,"bootTime":1722698255,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 22:48:53.963286 1186118 start.go:139] virtualization:  
	I0803 22:48:53.965452 1186118 out.go:97] [download-only-921243] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 22:48:53.965721 1186118 notify.go:220] Checking for updates...
	I0803 22:48:53.967593 1186118 out.go:169] MINIKUBE_LOCATION=19364
	I0803 22:48:53.969795 1186118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 22:48:53.971297 1186118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 22:48:53.973066 1186118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 22:48:53.975323 1186118 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0803 22:48:53.978320 1186118 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0803 22:48:53.978573 1186118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 22:48:54.006561 1186118 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 22:48:54.006674 1186118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:54.069625 1186118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:48:54.059027516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:54.069807 1186118 docker.go:307] overlay module found
	I0803 22:48:54.071665 1186118 out.go:97] Using the docker driver based on user configuration
	I0803 22:48:54.071698 1186118 start.go:297] selected driver: docker
	I0803 22:48:54.071706 1186118 start.go:901] validating driver "docker" against <nil>
	I0803 22:48:54.071836 1186118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 22:48:54.126128 1186118 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-03 22:48:54.117105511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 22:48:54.126306 1186118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0803 22:48:54.126588 1186118 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0803 22:48:54.126757 1186118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0803 22:48:54.128775 1186118 out.go:169] Using Docker driver with root privileges
	I0803 22:48:54.130627 1186118 cni.go:84] Creating CNI manager for ""
	I0803 22:48:54.130645 1186118 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0803 22:48:54.130656 1186118 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0803 22:48:54.130772 1186118 start.go:340] cluster config:
	{Name:download-only-921243 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-921243 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0
s}
	I0803 22:48:54.132814 1186118 out.go:97] Starting "download-only-921243" primary control-plane node in "download-only-921243" cluster
	I0803 22:48:54.132835 1186118 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0803 22:48:54.135347 1186118 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0803 22:48:54.135372 1186118 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0803 22:48:54.135566 1186118 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0803 22:48:54.150689 1186118 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0803 22:48:54.150829 1186118 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0803 22:48:54.150863 1186118 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0803 22:48:54.150874 1186118 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0803 22:48:54.150882 1186118 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0803 22:48:54.191039 1186118 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0803 22:48:54.191062 1186118 cache.go:56] Caching tarball of preloaded images
	I0803 22:48:54.191766 1186118 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0803 22:48:54.194532 1186118 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0803 22:48:54.194560 1186118 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0803 22:48:54.286788 1186118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:9f4f64d897eefd701781dd1aad6e4f21 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0803 22:49:00.133298 1186118 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0803 22:49:00.133444 1186118 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0803 22:49:01.093823 1186118 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on containerd
	I0803 22:49:01.094187 1186118 profile.go:143] Saving config to /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/download-only-921243/config.json ...
	I0803 22:49:01.094223 1186118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/download-only-921243/config.json: {Name:mke4d013d58d1640cc38f2e002a737a942e89f7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0803 22:49:01.094963 1186118 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime containerd
	I0803 22:49:01.095135 1186118 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19364-1180294/.minikube/cache/linux/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-921243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-921243"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-921243
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-761872 --alsologtostderr --binary-mirror http://127.0.0.1:37059 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-761872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-761872
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-369401
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-369401: exit status 85 (75.481467ms)

-- stdout --
	* Profile "addons-369401" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-369401"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-369401
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-369401: exit status 85 (88.284422ms)

-- stdout --
	* Profile "addons-369401" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-369401"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (221.93s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-369401 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-369401 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m41.929619556s)
--- PASS: TestAddons/Setup (221.93s)
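
Every addon exercised later in this report is enabled in that single start invocation; repeated --addons flags stack. A minimal sketch of the same pattern (the profile name "demo" and the trimmed addon list are illustrative; all flags are the ones shown above):

	# Enable several addons at cluster creation time.
	minikube start -p demo --memory=4000 --wait=true \
	  --driver=docker --container-runtime=containerd \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=volcano
	# Addons can also be toggled on an existing profile:
	minikube addons enable yakd -p demo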

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-369401 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-369401 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (16.35s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.133956ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-d9572" [a38ebabe-72ac-412d-b25d-2bdc0ed934b1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004345561s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tc486" [2188b0ce-6a1e-4e35-adaa-d8123d8321cb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006120469s
addons_test.go:342: (dbg) Run:  kubectl --context addons-369401 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-369401 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-369401 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.317158206s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 ip
2024/08/03 22:56:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.35s)
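
The registry probe above reduces to two checks; a sketch, with <profile> as a placeholder and assuming the profile's kubectl context is active:

	# In-cluster: a throwaway busybox pod HEADs the registry's service DNS name.
	kubectl run --rm registry-test --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox \
	  -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# From the host: registry-proxy publishes port 5000 on the node IP (see the GET above).
	curl "http://$(minikube -p <profile> ip):5000"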

TestAddons/parallel/Ingress (20.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-369401 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-369401 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-369401 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e512e325-2889-4fa4-a3e9-f66d3062245a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e512e325-2889-4fa4-a3e9-f66d3062245a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00411145s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-369401 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable ingress-dns --alsologtostderr -v=1: (1.519608858s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable ingress --alsologtostderr -v=1: (8.112375243s)
--- PASS: TestAddons/parallel/Ingress (20.39s)
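
The two ingress checks boil down to one in-node curl with a Host header plus a DNS lookup against the node IP for ingress-dns; a sketch (hostnames come from minikube's testdata manifests, <profile> is a placeholder):

	# Route through the ingress controller from inside the node.
	minikube -p <profile> ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns answers queries for ingress hostnames on the node IP.
	nslookup hello-john.test "$(minikube -p <profile> ip)"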

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fwqrb" [b098e080-bfc9-416c-94a9-5f7c27c62ada] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003954496s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-369401
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-369401: (5.822702752s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/MetricsServer (5.97s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.729032ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-6p6m4" [1cbf5238-59c7-45d1-bf04-fa4f91293308] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005266167s
addons_test.go:417: (dbg) Run:  kubectl --context addons-369401 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.97s)

TestAddons/parallel/CSI (54.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.491914ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-369401 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-369401 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [08254b04-d4a0-4092-9d31-0eb9defb8622] Pending
helpers_test.go:344: "task-pv-pod" [08254b04-d4a0-4092-9d31-0eb9defb8622] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [08254b04-d4a0-4092-9d31-0eb9defb8622] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003678425s
addons_test.go:590: (dbg) Run:  kubectl --context addons-369401 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-369401 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-369401 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-369401 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-369401 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-369401 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-369401 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0f8a1eb8-c77c-4f2f-a100-c3055bf3103c] Pending
helpers_test.go:344: "task-pv-pod-restore" [0f8a1eb8-c77c-4f2f-a100-c3055bf3103c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0f8a1eb8-c77c-4f2f-a100-c3055bf3103c] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.01547108s
addons_test.go:632: (dbg) Run:  kubectl --context addons-369401 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-369401 delete pod task-pv-pod-restore: (1.413668978s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-369401 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-369401 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.983133326s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable volumesnapshots --alsologtostderr -v=1: (1.141729059s)
--- PASS: TestAddons/parallel/CSI (54.98s)
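
The CSI sequence above is a full snapshot-and-restore round trip; condensed below, assuming the profile's context is active. The YAML files are minikube's testdata: a PVC, a pod mounting it, a VolumeSnapshot, and a restore PVC whose dataSource is that snapshot.

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml          # claim storage
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml       # write through a pod
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml     # snapshot the bound volume
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # new PVC restored from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml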

TestAddons/parallel/Headlamp (16.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-369401 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-369401 --alsologtostderr -v=1: (1.035832485s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-lz8bf" [0cc52d70-5578-4ccc-9a55-de2fcd42d7de] Pending
helpers_test.go:344: "headlamp-7867546754-lz8bf" [0cc52d70-5578-4ccc-9a55-de2fcd42d7de] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-lz8bf" [0cc52d70-5578-4ccc-9a55-de2fcd42d7de] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.005202216s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable headlamp --alsologtostderr -v=1: (5.773181958s)
--- PASS: TestAddons/parallel/Headlamp (16.82s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-56dr7" [8063c6cf-9a89-46c4-afc8-39f2634dbd2c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.009788819s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-369401
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (52.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-369401 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-369401 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9fc8644d-7f5c-4317-a1ca-b4112107f42e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9fc8644d-7f5c-4317-a1ca-b4112107f42e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9fc8644d-7f5c-4317-a1ca-b4112107f42e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003640028s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-369401 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 ssh "cat /opt/local-path-provisioner/pvc-59fff3e4-2e58-48b2-934e-67741f455418_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-369401 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-369401 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.288072322s)
--- PASS: TestAddons/parallel/LocalPath (52.53s)
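
The local-path check writes file1 through a PVC-backed pod and reads it back straight off the node; a sketch, where <profile> and the pvc-<uid> directory segment are placeholders (the provisioner allocates that segment per claim, as in the ssh cat above):

	kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml
	# The backing directory embeds the claim's generated volume name:
	minikube -p <profile> ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"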

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zhsj7" [124c1c15-5fe2-428e-b69b-b3114c15552e] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004542354s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-369401
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (11.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-mr7dg" [0c667988-a7e9-4f83-a6eb-7b5f084be790] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003162948s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-369401 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-369401 addons disable yakd --alsologtostderr -v=1: (5.868059237s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-369401
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-369401: (12.064636751s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-369401
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-369401
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-369401
--- PASS: TestAddons/StoppedEnableDisable (12.31s)
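
Note the ordering this test relies on: the addon commands are issued against a stopped cluster and still succeed, since addon state is profile configuration applied at the next start. The shape of the flow (<profile> is a placeholder):

	minikube stop -p <profile>
	minikube addons enable dashboard -p <profile>    # accepted while stopped
	minikube addons disable dashboard -p <profile>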

TestCertOptions (40.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-142692 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-142692 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (38.324436913s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-142692 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-142692 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-142692 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-142692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-142692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-142692: (1.985076338s)
--- PASS: TestCertOptions (40.93s)

TestCertExpiration (228.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-764783 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-764783 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.491780959s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-764783 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-764783 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.776585976s)
helpers_test.go:175: Cleaning up "cert-expiration-764783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-764783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-764783: (2.68893916s)
--- PASS: TestCertExpiration (228.96s)
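
Most of this test's 228s is deliberate waiting: certificates are issued with a 3-minute TTL, allowed to lapse, and the second start must transparently regenerate them with the new one-year TTL. A sketch of the two-phase flow (<profile> and the explicit sleep are illustrative):

	minikube start -p <profile> --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=containerd
	sleep 180   # let the 3m certificates expire
	minikube start -p <profile> --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=containerd   # must rotate the expired certs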

TestForceSystemdFlag (42.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-272336 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-272336 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.02983009s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-272336 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-272336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-272336
E0803 23:35:51.270293 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-272336: (2.187667005s)
--- PASS: TestForceSystemdFlag (42.60s)

TestForceSystemdEnv (47.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-180357 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-180357 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (45.012941873s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-180357 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-180357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-180357
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-180357: (2.139733866s)
--- PASS: TestForceSystemdEnv (47.48s)

TestDockerEnvContainerd (45.48s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-719705 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-719705 --driver=docker  --container-runtime=containerd: (29.456433279s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-719705"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-719705": (1.137627182s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k9ngp01FEqiS/agent.1205119" SSH_AGENT_PID="1205120" DOCKER_HOST=ssh://docker@127.0.0.1:34258 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k9ngp01FEqiS/agent.1205119" SSH_AGENT_PID="1205120" DOCKER_HOST=ssh://docker@127.0.0.1:34258 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k9ngp01FEqiS/agent.1205119" SSH_AGENT_PID="1205120" DOCKER_HOST=ssh://docker@127.0.0.1:34258 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.262258342s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k9ngp01FEqiS/agent.1205119" SSH_AGENT_PID="1205120" DOCKER_HOST=ssh://docker@127.0.0.1:34258 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-719705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-719705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-719705: (2.026488303s)
--- PASS: TestDockerEnvContainerd (45.48s)
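
The docker-env flow above is the scripted version of pointing a local docker CLI at the minikube node over SSH; interactively one would typically eval the command's output. A sketch (<profile> and the image tag are illustrative):

	eval "$(minikube -p <profile> docker-env --ssh-host --ssh-add)"
	docker version                       # now talks to the node via DOCKER_HOST=ssh://...
	docker build -t local/demo:latest .  # image lands in the cluster's containerd store
	docker image ls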

TestErrorSpam/setup (28.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-821708 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-821708 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-821708 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-821708 --driver=docker  --container-runtime=containerd: (28.596485389s)
--- PASS: TestErrorSpam/setup (28.60s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.77s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 unpause
--- PASS: TestErrorSpam/unpause (1.77s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 stop: (1.219559764s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-821708 --log_dir /tmp/nospam-821708 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19364-1180294/.minikube/files/etc/test/nested/copy/1185702/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-851027 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m9.675746554s)
--- PASS: TestFunctional/serial/StartWithProxy (69.68s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.31s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-851027 --alsologtostderr -v=8: (6.305251332s)
functional_test.go:659: soft start took 6.309440407s for "functional-851027" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.31s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-851027 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:3.1: (1.648343108s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:3.3: (1.483159215s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 cache add registry.k8s.io/pause:latest: (1.335472081s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-851027 /tmp/TestFunctionalserialCacheCmdcacheadd_local3601780893/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache add minikube-local-cache-test:functional-851027
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache delete minikube-local-cache-test:functional-851027
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-851027
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.377286ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 cache reload: (1.226124433s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
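
The exit-status-1 block above is the expected middle step of this test: the cached image is removed on the node, confirmed absent, then cache reload pushes every host-cached image back. The same sequence by hand (<profile> is a placeholder):

	minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
	minikube -p <profile> cache reload                                           # re-push cached images
	minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again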

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 kubectl -- --context functional-851027 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-851027 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (46.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-851027 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.567266753s)
functional_test.go:757: restart took 46.567377712s for "functional-851027" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.57s)
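
--extra-config forwards component flags through minikube to the underlying Kubernetes components; the restart above swaps the apiserver's admission-plugin set and waits for everything to settle. A sketch (<profile> is a placeholder):

	minikube start -p <profile> \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all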

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-851027 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 logs: (1.704805325s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 logs --file /tmp/TestFunctionalserialLogsFileCmd2992162367/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 logs --file /tmp/TestFunctionalserialLogsFileCmd2992162367/001/logs.txt: (1.768070038s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (4.46s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-851027 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-851027
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-851027: exit status 115 (641.898889ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32002 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-851027 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.46s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 config get cpus: exit status 14 (74.923054ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 config get cpus: exit status 14 (66.405157ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (7.31s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-851027 --alsologtostderr -v=1]
E0803 23:02:48.300852 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:48.381415 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:48.542402 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:48.863454 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:49.504534 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-851027 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1220424: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.31s)

TestFunctional/parallel/DryRun (0.6s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-851027 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (228.27925ms)

-- stdout --
	* [functional-851027] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0803 23:02:47.760000 1220008 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:02:47.760954 1220008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:47.760990 1220008 out.go:304] Setting ErrFile to fd 2...
	I0803 23:02:47.761012 1220008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:47.761294 1220008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:02:47.761839 1220008 out.go:298] Setting JSON to false
	I0803 23:02:47.762905 1220008 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27913,"bootTime":1722698255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 23:02:47.762998 1220008 start.go:139] virtualization:  
	I0803 23:02:47.767320 1220008 out.go:177] * [functional-851027] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 23:02:47.770125 1220008 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:02:47.770194 1220008 notify.go:220] Checking for updates...
	I0803 23:02:47.775991 1220008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:02:47.778256 1220008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:02:47.780157 1220008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 23:02:47.782620 1220008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 23:02:47.785514 1220008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:02:47.788072 1220008 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:02:47.788710 1220008 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:02:47.811332 1220008 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 23:02:47.811468 1220008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:02:47.894952 1220008 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-03 23:02:47.883600569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:02:47.895067 1220008 docker.go:307] overlay module found
	I0803 23:02:47.899028 1220008 out.go:177] * Using the docker driver based on existing profile
	I0803 23:02:47.900902 1220008 start.go:297] selected driver: docker
	I0803 23:02:47.900927 1220008 start.go:901] validating driver "docker" against &{Name:functional-851027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-851027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:02:47.901054 1220008 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:02:47.903680 1220008 out.go:177] 
	W0803 23:02:47.906201 1220008 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0803 23:02:47.908147 1220008 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.60s)
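The non-zero exit above is the behavior under test: --dry-run runs the full validation path without creating anything, and a memory request below the 1800MB usable minimum aborts with exit code 23. A sketch of the same check against this profile:

out/minikube-linux-arm64 start -p functional-851027 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
echo $?      # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)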

TestFunctional/parallel/InternationalLanguage (0.25s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-851027 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-851027 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (244.891461ms)

-- stdout --
	* [functional-851027] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0803 23:02:47.513581 1219937 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:02:47.513919 1219937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:47.513925 1219937 out.go:304] Setting ErrFile to fd 2...
	I0803 23:02:47.513930 1219937 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:02:47.519554 1219937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:02:47.520024 1219937 out.go:298] Setting JSON to false
	I0803 23:02:47.521098 1219937 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":27913,"bootTime":1722698255,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 23:02:47.521212 1219937 start.go:139] virtualization:  
	I0803 23:02:47.524831 1219937 out.go:177] * [functional-851027] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0803 23:02:47.528004 1219937 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:02:47.528053 1219937 notify.go:220] Checking for updates...
	I0803 23:02:47.532973 1219937 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:02:47.535090 1219937 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:02:47.537421 1219937 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 23:02:47.539617 1219937 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 23:02:47.542122 1219937 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:02:47.545250 1219937 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:02:47.545941 1219937 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:02:47.589810 1219937 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 23:02:47.590058 1219937 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:02:47.669826 1219937 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-03 23:02:47.660624985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:02:47.669937 1219937 docker.go:307] overlay module found
	I0803 23:02:47.672195 1219937 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0803 23:02:47.674026 1219937 start.go:297] selected driver: docker
	I0803 23:02:47.674045 1219937 start.go:901] validating driver "docker" against &{Name:functional-851027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-851027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0803 23:02:47.674155 1219937 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:02:47.676556 1219937 out.go:177] 
	W0803 23:02:47.678605 1219937 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0803 23:02:47.680607 1219937 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.31s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)

TestFunctional/parallel/ServiceCmdConnect (9.72s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-851027 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-851027 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7w8rv" [f327163c-3e78-4134-83e2-753e5c84be0e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-7w8rv" [f327163c-3e78-4134-83e2-753e5c84be0e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004034027s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32191
functional_test.go:1671: http://192.168.49.2:32191: success! body:

Hostname: hello-node-connect-6f49f58cd5-7w8rv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32191
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.72s)
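The flow above (create a deployment, expose it as a NodePort service, resolve the node URL) can be replayed directly; the trailing curl is illustrative and not part of the test:

kubectl --context functional-851027 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-851027 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-851027 service hello-node-connect --url)   # e.g. http://192.168.49.2:32191
curl "$URL"                                                                             # echoserver reflects the request, as in the body above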

TestFunctional/parallel/AddonsCmd (0.2s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (25.23s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [be71aad2-42b1-4725-84a1-8b5dc6f862d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004469263s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-851027 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-851027 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-851027 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-851027 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1393fb7e-ce0f-43ea-a38d-0a2d3d024068] Pending
helpers_test.go:344: "sp-pod" [1393fb7e-ce0f-43ea-a38d-0a2d3d024068] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1393fb7e-ce0f-43ea-a38d-0a2d3d024068] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003873634s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-851027 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-851027 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-851027 delete -f testdata/storage-provisioner/pod.yaml: (1.180855572s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-851027 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [662c7f76-1eda-441c-a2d6-8d5abac2c0a2] Pending
helpers_test.go:344: "sp-pod" [662c7f76-1eda-441c-a2d6-8d5abac2c0a2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007431453s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-851027 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.23s)
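The sequence above is the standard persistence check: data written through the claim must survive deletion of the pod. Replayed by hand with the same manifests:

kubectl --context functional-851027 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-851027 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-851027 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-851027 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-851027 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
kubectl --context functional-851027 exec sp-pod -- ls /tmp/mount                     # foo is still there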

TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (2.2s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh -n functional-851027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cp functional-851027:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3646296406/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh -n functional-851027 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh -n functional-851027 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.20s)
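The three invocations above cover host-to-guest, guest-to-host, and host-to-guest into a path that does not yet exist. A sketch of the same shapes (/tmp/local-copy.txt is an illustrative destination, not the test's temp path):

out/minikube-linux-arm64 -p functional-851027 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-851027 cp functional-851027:/home/docker/cp-test.txt /tmp/local-copy.txt
out/minikube-linux-arm64 -p functional-851027 ssh -n functional-851027 "sudo cat /home/docker/cp-test.txt"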

TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1185702/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /etc/test/nested/copy/1185702/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.25s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1185702.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /etc/ssl/certs/1185702.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1185702.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /usr/share/ca-certificates/1185702.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11857022.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /etc/ssl/certs/11857022.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11857022.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /usr/share/ca-certificates/11857022.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)

TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-851027 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active docker": exit status 1 (396.246611ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active crio": exit status 1 (311.925721ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
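Both non-zero exits above are the point of the test: on a containerd cluster the other runtimes must be inactive. systemctl is-active prints the state and exits 3 for an inactive unit, which minikube ssh surfaces as exit status 1:

out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active docker"   # inactive, exit 1
out/minikube-linux-arm64 -p functional-851027 ssh "sudo systemctl is-active crio"     # inactive, exit 1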

TestFunctional/parallel/License (0.25s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1217560: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-851027 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a581f765-9ef7-4fb8-a17a-6c31bc9187f9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a581f765-9ef7-4fb8-a17a-6c31bc9187f9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003341584s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-851027 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
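While minikube tunnel is running, a LoadBalancer service acquires an ingress IP that can be read back with the same jsonpath query the test uses (quoted here for shell safety):

kubectl --context functional-851027 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'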

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.12.214 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-851027 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-851027 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-851027 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-prts7" [d2d45768-5101-49a2-82dd-eb7b052a3a34] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-prts7" [d2d45768-5101-49a2-82dd-eb7b052a3a34] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003519179s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ServiceCmd/List (0.49s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service list -o json
functional_test.go:1490: Took "526.542522ms" to run "out/minikube-linux-arm64 -p functional-851027 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30300
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.62s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "524.256447ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "98.170731ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.62s)

TestFunctional/parallel/ServiceCmd/Format (0.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.6s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "508.535116ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "95.919094ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.60s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30300
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

TestFunctional/parallel/MountCmd/any-port (8.41s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdany-port426103108/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722726166074541946" to /tmp/TestFunctionalparallelMountCmdany-port426103108/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722726166074541946" to /tmp/TestFunctionalparallelMountCmdany-port426103108/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722726166074541946" to /tmp/TestFunctionalparallelMountCmdany-port426103108/001/test-1722726166074541946
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (458.481598ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  3 23:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  3 23:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  3 23:02 test-1722726166074541946
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh cat /mount-9p/test-1722726166074541946
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-851027 replace --force -f testdata/busybox-mount-test.yaml
E0803 23:02:48.218350 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:48.224641 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:02:48.239598 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [73252158-c1de-4299-a437-686c68ad6161] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0803 23:02:48.259703 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [73252158-c1de-4299-a437-686c68ad6161] Running
E0803 23:02:50.784795 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [73252158-c1de-4299-a437-686c68ad6161] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [73252158-c1de-4299-a437-686c68ad6161] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003488793s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-851027 logs busybox-mount
E0803 23:02:53.344928 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdany-port426103108/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.41s)
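The mount/verify/cleanup cycle above can be replayed by hand; /tmp/somedir is an illustrative host path, and mount stays in the foreground unless backgrounded:

out/minikube-linux-arm64 mount -p functional-851027 /tmp/somedir:/mount-9p &
out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p"   # retried above until the 9p mount appears
out/minikube-linux-arm64 -p functional-851027 ssh "sudo umount -f /mount-9p"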

TestFunctional/parallel/MountCmd/specific-port (2s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdspecific-port2775466102/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (513.919932ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T /mount-9p | grep 9p"
2024/08/03 23:02:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdspecific-port2775466102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "sudo umount -f /mount-9p": exit status 1 (339.864459ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-851027 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdspecific-port2775466102/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.32s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 version -o=json --components: (1.321068641s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T" /mount1: exit status 1 (531.036443ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh "findmnt -T" /mount3
E0803 23:02:58.465476 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-851027 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-851027 /tmp/TestFunctionalparallelMountCmdVerifyCleanup475346389/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.19s)
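The teardown being verified here also has a single-command form: one --kill flag removes every mount process belonging to the profile. A sketch assembled from the commands in the log (host directory is a placeholder):

    out/minikube-linux-arm64 mount -p functional-851027 /tmp/src:/mount1 &
    out/minikube-linux-arm64 mount -p functional-851027 /tmp/src:/mount2 &
    out/minikube-linux-arm64 mount -p functional-851027 /tmp/src:/mount3 &
    # kill all mount processes for the profile in one shot
    out/minikube-linux-arm64 mount -p functional-851027 --kill=true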

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-851027 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-851027
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-851027
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-851027 image ls --format short --alsologtostderr:
I0803 23:03:05.235685 1222931 out.go:291] Setting OutFile to fd 1 ...
I0803 23:03:05.235819 1222931 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.235873 1222931 out.go:304] Setting ErrFile to fd 2...
I0803 23:03:05.235885 1222931 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.236194 1222931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
I0803 23:03:05.236901 1222931 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.237074 1222931 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.237628 1222931 cli_runner.go:164] Run: docker container inspect functional-851027 --format={{.State.Status}}
I0803 23:03:05.271273 1222931 ssh_runner.go:195] Run: systemctl --version
I0803 23:03:05.271325 1222931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-851027
I0803 23:03:05.293639 1222931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34268 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/functional-851027/id_rsa Username:docker}
I0803 23:03:05.395984 1222931 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-851027 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-851027  | sha256:835d93 | 992B   |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:617731 | 29.9MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:d48f99 | 17.6MB |
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5e3296 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240719-e7903573 | sha256:f42786 | 33.3MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:2351f5 | 25.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kicbase/echo-server               | functional-851027  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:8e97cd | 28.4MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| docker.io/library/nginx                     | latest             | sha256:43b17f | 67.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-851027 image ls --format table --alsologtostderr:
I0803 23:03:05.862059 1223084 out.go:291] Setting OutFile to fd 1 ...
I0803 23:03:05.862347 1223084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.862378 1223084 out.go:304] Setting ErrFile to fd 2...
I0803 23:03:05.862399 1223084 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.862749 1223084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
I0803 23:03:05.863466 1223084 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.863638 1223084 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.864194 1223084 cli_runner.go:164] Run: docker container inspect functional-851027 --format={{.State.Status}}
I0803 23:03:05.890376 1223084 ssh_runner.go:195] Run: systemctl --version
I0803 23:03:05.890437 1223084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-851027
I0803 23:03:05.919538 1223084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34268 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/functional-851027/id_rsa Username:docker}
I0803 23:03:06.019326 1223084 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-851027 image ls --format json --alsologtostderr:
[{"id":"sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647629"},{"id":"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"29942692"},{"id":"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"28374500"},{"id":"sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"33290438"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-851027"],"size":"2173567"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"33296266"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"25645955"},{"id":"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"17641143"},{"id":"sha256:835d930d98389bac559ba4065222771d955cc05218d30186aaf64645e1a6af37","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-851027"],"size":"992"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-851027 image ls --format json --alsologtostderr:
I0803 23:03:05.560214 1222995 out.go:291] Setting OutFile to fd 1 ...
I0803 23:03:05.560618 1222995 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.561023 1222995 out.go:304] Setting ErrFile to fd 2...
I0803 23:03:05.561180 1222995 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.561538 1222995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
I0803 23:03:05.562203 1222995 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.562394 1222995 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.562919 1222995 cli_runner.go:164] Run: docker container inspect functional-851027 --format={{.State.Status}}
I0803 23:03:05.594760 1222995 ssh_runner.go:195] Run: systemctl --version
I0803 23:03:05.594810 1222995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-851027
I0803 23:03:05.630270 1222995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34268 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/functional-851027/id_rsa Username:docker}
I0803 23:03:05.733422 1222995 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
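The JSON format is the machine-readable variant of image ls. Assuming jq is available on the host (it is not part of the test), the blob above can be flattened into a tag/size listing:

    # print "tag<TAB>size-in-bytes" for every image; untagged images show as <none>
    out/minikube-linux-arm64 -p functional-851027 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'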

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-851027 image ls --format yaml --alsologtostderr:
- id: sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "33290438"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "25645955"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "17641143"
- id: sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "33296266"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-851027
size: "2173567"
- id: sha256:835d930d98389bac559ba4065222771d955cc05218d30186aaf64645e1a6af37
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-851027
size: "992"
- id: sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "29942692"
- id: sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "28374500"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "67647629"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-851027 image ls --format yaml --alsologtostderr:
I0803 23:03:05.235346 1222932 out.go:291] Setting OutFile to fd 1 ...
I0803 23:03:05.235595 1222932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.235627 1222932 out.go:304] Setting ErrFile to fd 2...
I0803 23:03:05.235648 1222932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.235997 1222932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
I0803 23:03:05.236711 1222932 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.236893 1222932 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.237433 1222932 cli_runner.go:164] Run: docker container inspect functional-851027 --format={{.State.Status}}
I0803 23:03:05.257056 1222932 ssh_runner.go:195] Run: systemctl --version
I0803 23:03:05.257111 1222932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-851027
I0803 23:03:05.277772 1222932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34268 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/functional-851027/id_rsa Username:docker}
I0803 23:03:05.381208 1222932 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-851027 ssh pgrep buildkitd: exit status 1 (348.604908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image build -t localhost/my-image:functional-851027 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 image build -t localhost/my-image:functional-851027 testdata/build --alsologtostderr: (2.29764603s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-851027 image build -t localhost/my-image:functional-851027 testdata/build --alsologtostderr:
I0803 23:03:05.876429 1223090 out.go:291] Setting OutFile to fd 1 ...
I0803 23:03:05.877385 1223090 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.877399 1223090 out.go:304] Setting ErrFile to fd 2...
I0803 23:03:05.877405 1223090 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0803 23:03:05.877879 1223090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
I0803 23:03:05.879676 1223090 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.881153 1223090 config.go:182] Loaded profile config "functional-851027": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0803 23:03:05.881644 1223090 cli_runner.go:164] Run: docker container inspect functional-851027 --format={{.State.Status}}
I0803 23:03:05.901353 1223090 ssh_runner.go:195] Run: systemctl --version
I0803 23:03:05.901415 1223090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-851027
I0803 23:03:05.924940 1223090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34268 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/functional-851027/id_rsa Username:docker}
I0803 23:03:06.030490 1223090 build_images.go:161] Building image from path: /tmp/build.4226795404.tar
I0803 23:03:06.030579 1223090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0803 23:03:06.064759 1223090 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4226795404.tar
I0803 23:03:06.070380 1223090 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4226795404.tar: stat -c "%s %y" /var/lib/minikube/build/build.4226795404.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4226795404.tar': No such file or directory
I0803 23:03:06.070410 1223090 ssh_runner.go:362] scp /tmp/build.4226795404.tar --> /var/lib/minikube/build/build.4226795404.tar (3072 bytes)
I0803 23:03:06.106511 1223090 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4226795404
I0803 23:03:06.115851 1223090 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4226795404 -xf /var/lib/minikube/build/build.4226795404.tar
I0803 23:03:06.125725 1223090 containerd.go:394] Building image: /var/lib/minikube/build/build.4226795404
I0803 23:03:06.125800 1223090 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4226795404 --local dockerfile=/var/lib/minikube/build/build.4226795404 --output type=image,name=localhost/my-image:functional-851027
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.1s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:99d5981d8636ccade9f8edc744136f4232b3a240aba854b6bb0815b97f36cc27
#8 exporting manifest sha256:99d5981d8636ccade9f8edc744136f4232b3a240aba854b6bb0815b97f36cc27 0.0s done
#8 exporting config sha256:9586fafa3321e1c8bf218a84cbd0969604225eb382d079c3d57e7111dba964a6 0.0s done
#8 naming to localhost/my-image:functional-851027 done
#8 DONE 0.1s
I0803 23:03:08.081820 1223090 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4226795404 --local dockerfile=/var/lib/minikube/build/build.4226795404 --output type=image,name=localhost/my-image:functional-851027: (1.955990179s)
I0803 23:03:08.081892 1223090 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4226795404
I0803 23:03:08.092382 1223090 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4226795404.tar
I0803 23:03:08.102600 1223090 build_images.go:217] Built localhost/my-image:functional-851027 from /tmp/build.4226795404.tar
I0803 23:03:08.102632 1223090 build_images.go:133] succeeded building to: functional-851027
I0803 23:03:08.102638 1223090 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)
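The three numbered stages in the buildctl output imply that testdata/build contains a Dockerfile along these lines; this is a reconstruction from the log (97 bytes transferred in step #1), not the verbatim file:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

On the containerd runtime the build is delegated to buildctl inside the node, which is why the stderr shows a sudo buildctl build invocation rather than a docker build.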

TestFunctional/parallel/ImageCommands/Setup (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-851027
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)
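Setup only stages a local image under the profile-specific tag; the load subtests that follow push that image into the cluster runtime. The pattern, taken directly from the commands above:

    docker pull docker.io/kicbase/echo-server:1.0
    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-851027
    # performed by the next subtests: copy the image into the cluster and confirm it is listed
    out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027
    out/minikube-linux-arm64 -p functional-851027 image ls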

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr: (1.450677236s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr: (1.304248252s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-851027
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-851027 image load --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr: (1.125439963s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image save docker.io/kicbase/echo-server:functional-851027 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.88s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image rm docker.io/kicbase/echo-server:functional-851027 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-851027
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-851027 image save --daemon docker.io/kicbase/echo-server:functional-851027 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-851027
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.75s)
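Taken together, the last four image subtests amount to a save/remove/reload round trip, condensed here from the commands above (the tar path is the one the test uses):

    out/minikube-linux-arm64 -p functional-851027 image save docker.io/kicbase/echo-server:functional-851027 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-851027 image rm docker.io/kicbase/echo-server:functional-851027
    out/minikube-linux-arm64 -p functional-851027 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
    # or bypass the file and export straight into the host docker daemon
    out/minikube-linux-arm64 -p functional-851027 image save --daemon docker.io/kicbase/echo-server:functional-851027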

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-851027
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-851027
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-851027
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (122.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-739823 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0803 23:03:29.186267 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:04:10.146481 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-739823 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m1.826167099s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (122.79s)
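For anyone reproducing this group locally, the start invocation above is the whole setup: --ha provisions multiple control-plane nodes (the status output in later subtests shows m02 and m03 as control planes and m04 as a worker):

    out/minikube-linux-arm64 start -p ha-739823 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr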

TestMultiControlPlane/serial/DeployApp (31.32s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- rollout status deployment/busybox
E0803 23:05:32.067417 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-739823 -- rollout status deployment/busybox: (28.261311153s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-psb82 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-t58c9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-psb82 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-t58c9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-psb82 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-t58c9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.32s)
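Each busybox replica is checked against an external name, the in-cluster short name, and the fully qualified service name. One probe, verbatim from the log (the pod name is whichever replica the earlier get pods call returned):

    out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.default
    out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- nslookup kubernetes.default.svc.cluster.local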

TestMultiControlPlane/serial/PingHostFromPods (1.58s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-psb82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-psb82 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-t58c9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-t58c9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
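The shell pipeline above relies on busybox nslookup's fixed output layout: awk 'NR==5' keeps the fifth line (the answer line), and cut -d' ' -f3 extracts the address field, which is then pinged. Reproduced from the log:

    out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 kubectl -p ha-739823 -- exec busybox-fc5497c4f-jcvfh -- sh -c "ping -c 1 192.168.49.1"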

TestMultiControlPlane/serial/AddWorkerNode (22.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-739823 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-739823 -v=7 --alsologtostderr: (21.67142003s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr: (1.002229625s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.67s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-739823 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

TestMultiControlPlane/serial/CopyFile (19.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 status --output json -v=7 --alsologtostderr: (1.020202675s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp testdata/cp-test.txt ha-739823:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859361090/001/cp-test_ha-739823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823:/home/docker/cp-test.txt ha-739823-m02:/home/docker/cp-test_ha-739823_ha-739823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test_ha-739823_ha-739823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823:/home/docker/cp-test.txt ha-739823-m03:/home/docker/cp-test_ha-739823_ha-739823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test_ha-739823_ha-739823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823:/home/docker/cp-test.txt ha-739823-m04:/home/docker/cp-test_ha-739823_ha-739823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test_ha-739823_ha-739823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp testdata/cp-test.txt ha-739823-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859361090/001/cp-test_ha-739823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m02:/home/docker/cp-test.txt ha-739823:/home/docker/cp-test_ha-739823-m02_ha-739823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test_ha-739823-m02_ha-739823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m02:/home/docker/cp-test.txt ha-739823-m03:/home/docker/cp-test_ha-739823-m02_ha-739823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test_ha-739823-m02_ha-739823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m02:/home/docker/cp-test.txt ha-739823-m04:/home/docker/cp-test_ha-739823-m02_ha-739823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test_ha-739823-m02_ha-739823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp testdata/cp-test.txt ha-739823-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859361090/001/cp-test_ha-739823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m03:/home/docker/cp-test.txt ha-739823:/home/docker/cp-test_ha-739823-m03_ha-739823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test_ha-739823-m03_ha-739823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m03:/home/docker/cp-test.txt ha-739823-m02:/home/docker/cp-test_ha-739823-m03_ha-739823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test_ha-739823-m03_ha-739823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m03:/home/docker/cp-test.txt ha-739823-m04:/home/docker/cp-test_ha-739823-m03_ha-739823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test_ha-739823-m03_ha-739823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp testdata/cp-test.txt ha-739823-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile859361090/001/cp-test_ha-739823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m04:/home/docker/cp-test.txt ha-739823:/home/docker/cp-test_ha-739823-m04_ha-739823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823 "sudo cat /home/docker/cp-test_ha-739823-m04_ha-739823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m04:/home/docker/cp-test.txt ha-739823-m02:/home/docker/cp-test_ha-739823-m04_ha-739823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m02 "sudo cat /home/docker/cp-test_ha-739823-m04_ha-739823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 cp ha-739823-m04:/home/docker/cp-test.txt ha-739823-m03:/home/docker/cp-test_ha-739823-m04_ha-739823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 ssh -n ha-739823-m03 "sudo cat /home/docker/cp-test_ha-739823-m04_ha-739823-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.01s)

TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 node stop m02 -v=7 --alsologtostderr: (12.092294699s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr: exit status 7 (819.193104ms)

-- stdout --
	ha-739823
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-739823-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739823-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-739823-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0803 23:06:41.544108 1239598 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:06:41.544302 1239598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:41.544332 1239598 out.go:304] Setting ErrFile to fd 2...
	I0803 23:06:41.544357 1239598 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:06:41.544613 1239598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:06:41.544867 1239598 out.go:298] Setting JSON to false
	I0803 23:06:41.544951 1239598 mustload.go:65] Loading cluster: ha-739823
	I0803 23:06:41.545007 1239598 notify.go:220] Checking for updates...
	I0803 23:06:41.546091 1239598 config.go:182] Loaded profile config "ha-739823": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:06:41.546133 1239598 status.go:255] checking status of ha-739823 ...
	I0803 23:06:41.546652 1239598 cli_runner.go:164] Run: docker container inspect ha-739823 --format={{.State.Status}}
	I0803 23:06:41.566504 1239598 status.go:330] ha-739823 host status = "Running" (err=<nil>)
	I0803 23:06:41.566527 1239598 host.go:66] Checking if "ha-739823" exists ...
	I0803 23:06:41.566808 1239598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-739823
	I0803 23:06:41.586703 1239598 host.go:66] Checking if "ha-739823" exists ...
	I0803 23:06:41.587013 1239598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:06:41.587148 1239598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-739823
	I0803 23:06:41.627444 1239598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34273 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/ha-739823/id_rsa Username:docker}
	I0803 23:06:41.726176 1239598 ssh_runner.go:195] Run: systemctl --version
	I0803 23:06:41.731044 1239598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:06:41.742445 1239598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:06:41.799279 1239598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-03 23:06:41.786302048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:06:41.799984 1239598 kubeconfig.go:125] found "ha-739823" server: "https://192.168.49.254:8443"
	I0803 23:06:41.800021 1239598 api_server.go:166] Checking apiserver status ...
	I0803 23:06:41.800071 1239598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:06:41.812980 1239598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	I0803 23:06:41.822930 1239598 api_server.go:182] apiserver freezer: "4:freezer:/docker/777ba077107cca20a944d2308555639aee9c29ad44aaa23af57e91e6d286a0d0/kubepods/burstable/pod85c976d8af413e677bbcd928bf80511a/eaa06a07f98e0d55d7b22b3a67b250fd34c8d72bbe04a2cdbb2e64763908ce07"
	I0803 23:06:41.823011 1239598 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/777ba077107cca20a944d2308555639aee9c29ad44aaa23af57e91e6d286a0d0/kubepods/burstable/pod85c976d8af413e677bbcd928bf80511a/eaa06a07f98e0d55d7b22b3a67b250fd34c8d72bbe04a2cdbb2e64763908ce07/freezer.state
	I0803 23:06:41.836342 1239598 api_server.go:204] freezer state: "THAWED"
	I0803 23:06:41.836370 1239598 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0803 23:06:41.845709 1239598 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0803 23:06:41.845739 1239598 status.go:422] ha-739823 apiserver status = Running (err=<nil>)
	I0803 23:06:41.845751 1239598 status.go:257] ha-739823 status: &{Name:ha-739823 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:06:41.845768 1239598 status.go:255] checking status of ha-739823-m02 ...
	I0803 23:06:41.846073 1239598 cli_runner.go:164] Run: docker container inspect ha-739823-m02 --format={{.State.Status}}
	I0803 23:06:41.864086 1239598 status.go:330] ha-739823-m02 host status = "Stopped" (err=<nil>)
	I0803 23:06:41.864108 1239598 status.go:343] host is not running, skipping remaining checks
	I0803 23:06:41.864134 1239598 status.go:257] ha-739823-m02 status: &{Name:ha-739823-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:06:41.864159 1239598 status.go:255] checking status of ha-739823-m03 ...
	I0803 23:06:41.864494 1239598 cli_runner.go:164] Run: docker container inspect ha-739823-m03 --format={{.State.Status}}
	I0803 23:06:41.881940 1239598 status.go:330] ha-739823-m03 host status = "Running" (err=<nil>)
	I0803 23:06:41.881967 1239598 host.go:66] Checking if "ha-739823-m03" exists ...
	I0803 23:06:41.882275 1239598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-739823-m03
	I0803 23:06:41.901967 1239598 host.go:66] Checking if "ha-739823-m03" exists ...
	I0803 23:06:41.902296 1239598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:06:41.902340 1239598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-739823-m03
	I0803 23:06:41.921566 1239598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34283 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/ha-739823-m03/id_rsa Username:docker}
	I0803 23:06:42.018384 1239598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:06:42.036678 1239598 kubeconfig.go:125] found "ha-739823" server: "https://192.168.49.254:8443"
	I0803 23:06:42.036811 1239598 api_server.go:166] Checking apiserver status ...
	I0803 23:06:42.036887 1239598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:06:42.050334 1239598 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0803 23:06:42.061896 1239598 api_server.go:182] apiserver freezer: "4:freezer:/docker/50894ca36a269f1e00ff1687aa5341290402246e90f9fb2bf3ead84835edc97c/kubepods/burstable/pod56d5545a93f327f59b07d1826aa4a8e7/12f822f2523f94e8154a57d852b486b33a29663ea4612a5ca7b563bc1981a91d"
	I0803 23:06:42.062027 1239598 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/50894ca36a269f1e00ff1687aa5341290402246e90f9fb2bf3ead84835edc97c/kubepods/burstable/pod56d5545a93f327f59b07d1826aa4a8e7/12f822f2523f94e8154a57d852b486b33a29663ea4612a5ca7b563bc1981a91d/freezer.state
	I0803 23:06:42.073980 1239598 api_server.go:204] freezer state: "THAWED"
	I0803 23:06:42.074054 1239598 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0803 23:06:42.082488 1239598 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0803 23:06:42.082524 1239598 status.go:422] ha-739823-m03 apiserver status = Running (err=<nil>)
	I0803 23:06:42.082535 1239598 status.go:257] ha-739823-m03 status: &{Name:ha-739823-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:06:42.082581 1239598 status.go:255] checking status of ha-739823-m04 ...
	I0803 23:06:42.082943 1239598 cli_runner.go:164] Run: docker container inspect ha-739823-m04 --format={{.State.Status}}
	I0803 23:06:42.111826 1239598 status.go:330] ha-739823-m04 host status = "Running" (err=<nil>)
	I0803 23:06:42.111854 1239598 host.go:66] Checking if "ha-739823-m04" exists ...
	I0803 23:06:42.112171 1239598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-739823-m04
	I0803 23:06:42.146000 1239598 host.go:66] Checking if "ha-739823-m04" exists ...
	I0803 23:06:42.148494 1239598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:06:42.148574 1239598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-739823-m04
	I0803 23:06:42.177703 1239598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34288 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/ha-739823-m04/id_rsa Username:docker}
	I0803 23:06:42.274468 1239598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:06:42.290172 1239598 status.go:257] ha-739823-m04 status: &{Name:ha-739823-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
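For reference, the node-status probe that produced the trace above can be approximated by hand. A minimal sketch, assuming the profile name, load-balancer endpoint, and cgroup path seen in this particular run (none of this is part of the test suite itself):

    # Container state per node (minikube shells out to docker for this)
    docker container inspect ha-739823 --format={{.State.Status}}

    # On a running control-plane node: find kube-apiserver, then check its freezer cgroup
    minikube -p ha-739823 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*
    minikube -p ha-739823 ssh -- sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/<pod-cgroup>/freezer.state

    # Health of the load-balanced apiserver endpoint
    curl -k https://192.168.49.254:8443/healthz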

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 node start m02 -v=7 --alsologtostderr: (17.273202557s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr: (1.204193375s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.29s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-739823 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-739823 -v=7 --alsologtostderr
E0803 23:07:18.644013 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.649331 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.659647 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.679918 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.720175 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.800553 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:18.960921 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:19.281604 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:19.922630 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:21.203068 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:23.764797 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:28.885324 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:07:39.125601 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-739823 -v=7 --alsologtostderr: (37.197801038s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-739823 --wait=true -v=7 --alsologtostderr
E0803 23:07:48.212884 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:07:59.606282 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:08:15.908344 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
E0803 23:08:40.567212 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-739823 --wait=true -v=7 --alsologtostderr: (1m40.920654769s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-739823
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (138.29s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.44s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 node delete m03 -v=7 --alsologtostderr: (10.433062672s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.44s)
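The go-template in the final check above walks each node's status.conditions and prints the status of the Ready condition. An equivalent hand-written probe using JSONPath (an alternative shown only for clarity; it is not what the test runs) would be:

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'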

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (36.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 stop -v=7 --alsologtostderr
E0803 23:10:02.487496 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 stop -v=7 --alsologtostderr: (35.921760395s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr: exit status 7 (117.361313ms)

-- stdout --
	ha-739823
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739823-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-739823-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0803 23:10:08.507632 1254011 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:10:08.507814 1254011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:10:08.507827 1254011 out.go:304] Setting ErrFile to fd 2...
	I0803 23:10:08.507834 1254011 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:10:08.508090 1254011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:10:08.508306 1254011 out.go:298] Setting JSON to false
	I0803 23:10:08.508359 1254011 mustload.go:65] Loading cluster: ha-739823
	I0803 23:10:08.508410 1254011 notify.go:220] Checking for updates...
	I0803 23:10:08.508878 1254011 config.go:182] Loaded profile config "ha-739823": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:10:08.508897 1254011 status.go:255] checking status of ha-739823 ...
	I0803 23:10:08.509417 1254011 cli_runner.go:164] Run: docker container inspect ha-739823 --format={{.State.Status}}
	I0803 23:10:08.528129 1254011 status.go:330] ha-739823 host status = "Stopped" (err=<nil>)
	I0803 23:10:08.528156 1254011 status.go:343] host is not running, skipping remaining checks
	I0803 23:10:08.528164 1254011 status.go:257] ha-739823 status: &{Name:ha-739823 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:10:08.528196 1254011 status.go:255] checking status of ha-739823-m02 ...
	I0803 23:10:08.528510 1254011 cli_runner.go:164] Run: docker container inspect ha-739823-m02 --format={{.State.Status}}
	I0803 23:10:08.546094 1254011 status.go:330] ha-739823-m02 host status = "Stopped" (err=<nil>)
	I0803 23:10:08.546118 1254011 status.go:343] host is not running, skipping remaining checks
	I0803 23:10:08.546125 1254011 status.go:257] ha-739823-m02 status: &{Name:ha-739823-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:10:08.546145 1254011 status.go:255] checking status of ha-739823-m04 ...
	I0803 23:10:08.546473 1254011 cli_runner.go:164] Run: docker container inspect ha-739823-m04 --format={{.State.Status}}
	I0803 23:10:08.573793 1254011 status.go:330] ha-739823-m04 host status = "Stopped" (err=<nil>)
	I0803 23:10:08.573813 1254011 status.go:343] host is not running, skipping remaining checks
	I0803 23:10:08.573829 1254011 status.go:257] ha-739823-m04 status: &{Name:ha-739823-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

TestMultiControlPlane/serial/RestartCluster (73.58s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-739823 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-739823 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m12.618652569s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (73.58s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (44.1s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-739823 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-739823 --control-plane -v=7 --alsologtostderr: (43.065763983s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-739823 status -v=7 --alsologtostderr: (1.03390586s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.10s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (69.83s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-285216 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0803 23:12:18.644290 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:12:46.327716 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:12:48.212641 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-285216 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m9.818759949s)
--- PASS: TestJSONOutput/start/Command (69.83s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-285216 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-285216 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-285216 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-285216 --output=json --user=testUser: (5.783069599s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-390183 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-390183 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.823649ms)

-- stdout --
	{"specversion":"1.0","id":"09907b77-576a-4a93-ae07-8e8653bd89ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-390183] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b00eb64-343f-4e46-84a3-ca87178f9cfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"40051d03-df0f-4b35-9f5e-8a240469cfdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"80d474fd-275d-4a17-aaa6-f3d832ee5581","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig"}}
	{"specversion":"1.0","id":"a746f2c8-c1ff-4bbd-84a2-e16eedfab503","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube"}}
	{"specversion":"1.0","id":"989ee73f-5b9a-4c38-9186-8ba44ab0030f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b98ddb62-a306-469f-b505-e38691e71651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a3cb687-e81d-4695-b72e-5be707fd0110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-390183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-390183
--- PASS: TestErrorJSONOutput (0.22s)
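The stdout above is a stream of CloudEvents-style JSON objects, one per line, which is what makes the error assertion here mechanical. A minimal sketch of pulling the error event out of such a stream, assuming jq is installed (this is not part of the test itself):

    out/minikube-linux-arm64 start -p json-output-error-390183 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64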

TestKicCustomNetwork/create_custom_network (38.47s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-127901 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-127901 --network=: (36.430425512s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-127901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-127901
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-127901: (2.020301259s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.47s)

TestKicCustomNetwork/use_default_bridge_network (33.81s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-012852 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-012852 --network=bridge: (31.746730798s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-012852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-012852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-012852: (2.040990748s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.81s)

TestKicExistingNetwork (32.1s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-022141 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-022141 --network=existing-network: (29.921704009s)
helpers_test.go:175: Cleaning up "existing-network-022141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-022141
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-022141: (2.023208915s)
--- PASS: TestKicExistingNetwork (32.10s)

TestKicCustomSubnet (37.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-579389 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-579389 --subnet=192.168.60.0/24: (35.478734248s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-579389 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-579389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-579389
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-579389: (2.025447496s)
--- PASS: TestKicCustomSubnet (37.53s)

TestKicStaticIP (33.44s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-224618 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-224618 --static-ip=192.168.200.200: (31.256281503s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-224618 ip
helpers_test.go:175: Cleaning up "static-ip-224618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-224618
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-224618: (2.034240974s)
--- PASS: TestKicStaticIP (33.44s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-649051 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-649051 --driver=docker  --container-runtime=containerd: (29.693466102s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-651554 --driver=docker  --container-runtime=containerd
E0803 23:17:18.643713 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-651554 --driver=docker  --container-runtime=containerd: (32.653922685s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-649051
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-651554
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-651554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-651554
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-651554: (2.008190641s)
helpers_test.go:175: Cleaning up "first-649051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-649051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-649051: (2.037028913s)
--- PASS: TestMinikubeProfile (67.57s)

TestMountStart/serial/StartWithMountFirst (6.43s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-879296 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-879296 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.43203461s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.43s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-879296 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.19s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-896436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0803 23:17:48.213461 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-896436 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.191613185s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.19s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.57s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-879296 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-879296 --alsologtostderr -v=5: (1.57474541s)
--- PASS: TestMountStart/serial/DeleteFirst (1.57s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-896436
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-896436: (1.20347638s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-896436
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-896436: (6.404881947s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-896436 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (78.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-321147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0803 23:19:11.269167 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-321147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.143799206s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (78.65s)

TestMultiNode/serial/DeployApp2Nodes (18.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-321147 -- rollout status deployment/busybox: (16.13544104s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-n2674 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-p426v -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-n2674 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-p426v -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-n2674 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-p426v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-n2674 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-n2674 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-p426v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-321147 -- exec busybox-fc5497c4f-p426v -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
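
Note: host.minikube.internal resolves to the host-side gateway (192.168.58.1
here); the awk/cut pipeline merely extracts that address from nslookup output.
A minimal sketch; <busybox-pod> is illustrative:

    $ kubectl exec <busybox-pod> -- sh -c \
        "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.58.1"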

TestMultiNode/serial/AddNode (15.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-321147 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-321147 -v 3 --alsologtostderr: (15.209889161s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.94s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-321147 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

TestMultiNode/serial/CopyFile (10.34s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-linux-arm64 -p multinode-321147 status --output json --alsologtostderr: (1.011769907s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp testdata/cp-test.txt multinode-321147:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1581409346/001/cp-test_multinode-321147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147:/home/docker/cp-test.txt multinode-321147-m02:/home/docker/cp-test_multinode-321147_multinode-321147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test_multinode-321147_multinode-321147-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147:/home/docker/cp-test.txt multinode-321147-m03:/home/docker/cp-test_multinode-321147_multinode-321147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test_multinode-321147_multinode-321147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp testdata/cp-test.txt multinode-321147-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1581409346/001/cp-test_multinode-321147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m02:/home/docker/cp-test.txt multinode-321147:/home/docker/cp-test_multinode-321147-m02_multinode-321147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test_multinode-321147-m02_multinode-321147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m02:/home/docker/cp-test.txt multinode-321147-m03:/home/docker/cp-test_multinode-321147-m02_multinode-321147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test_multinode-321147-m02_multinode-321147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp testdata/cp-test.txt multinode-321147-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1581409346/001/cp-test_multinode-321147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m03:/home/docker/cp-test.txt multinode-321147:/home/docker/cp-test_multinode-321147-m03_multinode-321147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147 "sudo cat /home/docker/cp-test_multinode-321147-m03_multinode-321147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 cp multinode-321147-m03:/home/docker/cp-test.txt multinode-321147-m02:/home/docker/cp-test_multinode-321147-m03_multinode-321147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 ssh -n multinode-321147-m02 "sudo cat /home/docker/cp-test_multinode-321147-m03_multinode-321147-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.34s)
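
Note: every pairing above is the same two primitives: minikube cp to move the
file, minikube ssh -n to verify it on a given node. A minimal sketch with an
illustrative "demo" profile (nodes are named demo, demo-m02, ...):

    $ out/minikube-linux-arm64 -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
    $ out/minikube-linux-arm64 -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-arm64 -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"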

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-321147 node stop m03: (1.227886166s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-321147 status: exit status 7 (501.159364ms)

-- stdout --
	multinode-321147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-321147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-321147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr: exit status 7 (529.659209ms)

-- stdout --
	multinode-321147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-321147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-321147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0803 23:20:11.801838 1308116 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:20:11.802018 1308116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:20:11.802030 1308116 out.go:304] Setting ErrFile to fd 2...
	I0803 23:20:11.802036 1308116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:20:11.802310 1308116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:20:11.802530 1308116 out.go:298] Setting JSON to false
	I0803 23:20:11.802579 1308116 mustload.go:65] Loading cluster: multinode-321147
	I0803 23:20:11.802687 1308116 notify.go:220] Checking for updates...
	I0803 23:20:11.803050 1308116 config.go:182] Loaded profile config "multinode-321147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:20:11.803071 1308116 status.go:255] checking status of multinode-321147 ...
	I0803 23:20:11.803923 1308116 cli_runner.go:164] Run: docker container inspect multinode-321147 --format={{.State.Status}}
	I0803 23:20:11.823262 1308116 status.go:330] multinode-321147 host status = "Running" (err=<nil>)
	I0803 23:20:11.823299 1308116 host.go:66] Checking if "multinode-321147" exists ...
	I0803 23:20:11.823579 1308116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-321147
	I0803 23:20:11.839635 1308116 host.go:66] Checking if "multinode-321147" exists ...
	I0803 23:20:11.839969 1308116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:20:11.840028 1308116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-321147
	I0803 23:20:11.861731 1308116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34393 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/multinode-321147/id_rsa Username:docker}
	I0803 23:20:11.954330 1308116 ssh_runner.go:195] Run: systemctl --version
	I0803 23:20:11.958940 1308116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:20:11.970502 1308116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:20:12.041648 1308116 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-03 23:20:12.030441333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:20:12.042314 1308116 kubeconfig.go:125] found "multinode-321147" server: "https://192.168.58.2:8443"
	I0803 23:20:12.042353 1308116 api_server.go:166] Checking apiserver status ...
	I0803 23:20:12.042406 1308116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0803 23:20:12.057503 1308116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1509/cgroup
	I0803 23:20:12.068469 1308116 api_server.go:182] apiserver freezer: "4:freezer:/docker/96526b256e9ebb66db2ef045c5e26715a67f51c0247d3c095dbb26109d1f022e/kubepods/burstable/pod15f792ca5b7ed014873bbfe23ded0b72/6e75afe59c552c3d4aea7bcfa1391e1bcba27903a61750a29f16e13042d5d44e"
	I0803 23:20:12.068572 1308116 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/96526b256e9ebb66db2ef045c5e26715a67f51c0247d3c095dbb26109d1f022e/kubepods/burstable/pod15f792ca5b7ed014873bbfe23ded0b72/6e75afe59c552c3d4aea7bcfa1391e1bcba27903a61750a29f16e13042d5d44e/freezer.state
	I0803 23:20:12.078769 1308116 api_server.go:204] freezer state: "THAWED"
	I0803 23:20:12.078797 1308116 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0803 23:20:12.086927 1308116 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0803 23:20:12.086956 1308116 status.go:422] multinode-321147 apiserver status = Running (err=<nil>)
	I0803 23:20:12.086968 1308116 status.go:257] multinode-321147 status: &{Name:multinode-321147 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:20:12.086985 1308116 status.go:255] checking status of multinode-321147-m02 ...
	I0803 23:20:12.087317 1308116 cli_runner.go:164] Run: docker container inspect multinode-321147-m02 --format={{.State.Status}}
	I0803 23:20:12.106659 1308116 status.go:330] multinode-321147-m02 host status = "Running" (err=<nil>)
	I0803 23:20:12.106730 1308116 host.go:66] Checking if "multinode-321147-m02" exists ...
	I0803 23:20:12.107075 1308116 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-321147-m02
	I0803 23:20:12.139280 1308116 host.go:66] Checking if "multinode-321147-m02" exists ...
	I0803 23:20:12.139670 1308116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0803 23:20:12.139718 1308116 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-321147-m02
	I0803 23:20:12.157434 1308116 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34398 SSHKeyPath:/home/jenkins/minikube-integration/19364-1180294/.minikube/machines/multinode-321147-m02/id_rsa Username:docker}
	I0803 23:20:12.250261 1308116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0803 23:20:12.262059 1308116 status.go:257] multinode-321147-m02 status: &{Name:multinode-321147-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:20:12.262095 1308116 status.go:255] checking status of multinode-321147-m03 ...
	I0803 23:20:12.262410 1308116 cli_runner.go:164] Run: docker container inspect multinode-321147-m03 --format={{.State.Status}}
	I0803 23:20:12.279020 1308116 status.go:330] multinode-321147-m03 host status = "Stopped" (err=<nil>)
	I0803 23:20:12.279044 1308116 status.go:343] host is not running, skipping remaining checks
	I0803 23:20:12.279051 1308116 status.go:257] multinode-321147-m03 status: &{Name:multinode-321147-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
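
Note: with one worker stopped, status intentionally exits 7 rather than 0, so
scripts should treat exit 7 as "cluster degraded" and not as a command failure.
A minimal sketch:

    $ out/minikube-linux-arm64 -p demo node stop m03
    $ out/minikube-linux-arm64 -p demo status || echo "status exited with $?"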

TestMultiNode/serial/StartAfterStop (9.39s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-321147 node start m03 -v=7 --alsologtostderr: (8.610750138s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.39s)

TestMultiNode/serial/RestartKeepsNodes (89.84s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-321147
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-321147
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-321147: (24.97096436s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-321147 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-321147 --wait=true -v=8 --alsologtostderr: (1m4.747133852s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-321147
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.84s)
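
Note: the node list lives in the profile config, so a full stop/start
round-trip restores all three nodes. A minimal sketch:

    $ out/minikube-linux-arm64 node list -p demo     # record the node set
    $ out/minikube-linux-arm64 stop -p demo
    $ out/minikube-linux-arm64 start -p demo --wait=true
    $ out/minikube-linux-arm64 node list -p demo     # same node set as before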

TestMultiNode/serial/DeleteNode (5.38s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-321147 node delete m03: (4.739120017s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.38s)

TestMultiNode/serial/StopMultiNode (24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 stop
E0803 23:22:18.644133 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-321147 stop: (23.83199827s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-321147 status: exit status 7 (85.067839ms)

-- stdout --
	multinode-321147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-321147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr: exit status 7 (82.100901ms)

-- stdout --
	multinode-321147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-321147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0803 23:22:20.861638 1316129 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:22:20.861792 1316129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:22:20.861803 1316129 out.go:304] Setting ErrFile to fd 2...
	I0803 23:22:20.861808 1316129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:22:20.862056 1316129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:22:20.862249 1316129 out.go:298] Setting JSON to false
	I0803 23:22:20.862291 1316129 mustload.go:65] Loading cluster: multinode-321147
	I0803 23:22:20.862451 1316129 notify.go:220] Checking for updates...
	I0803 23:22:20.862730 1316129 config.go:182] Loaded profile config "multinode-321147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:22:20.862749 1316129 status.go:255] checking status of multinode-321147 ...
	I0803 23:22:20.863274 1316129 cli_runner.go:164] Run: docker container inspect multinode-321147 --format={{.State.Status}}
	I0803 23:22:20.880716 1316129 status.go:330] multinode-321147 host status = "Stopped" (err=<nil>)
	I0803 23:22:20.880763 1316129 status.go:343] host is not running, skipping remaining checks
	I0803 23:22:20.880770 1316129 status.go:257] multinode-321147 status: &{Name:multinode-321147 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0803 23:22:20.880806 1316129 status.go:255] checking status of multinode-321147-m02 ...
	I0803 23:22:20.881157 1316129 cli_runner.go:164] Run: docker container inspect multinode-321147-m02 --format={{.State.Status}}
	I0803 23:22:20.897481 1316129 status.go:330] multinode-321147-m02 host status = "Stopped" (err=<nil>)
	I0803 23:22:20.897506 1316129 status.go:343] host is not running, skipping remaining checks
	I0803 23:22:20.897520 1316129 status.go:257] multinode-321147-m02 status: &{Name:multinode-321147-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

TestMultiNode/serial/RestartMultiNode (56.89s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-321147 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0803 23:22:48.212554 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-321147 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.225057615s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-321147 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.89s)

TestMultiNode/serial/ValidateNameConflict (32.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-321147
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-321147-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-321147-m02 --driver=docker  --container-runtime=containerd: exit status 14 (98.695311ms)

-- stdout --
	* [multinode-321147-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-321147-m02' is duplicated with machine name 'multinode-321147-m02' in profile 'multinode-321147'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-321147-m03 --driver=docker  --container-runtime=containerd
E0803 23:23:41.687924 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-321147-m03 --driver=docker  --container-runtime=containerd: (29.7460045s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-321147
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-321147: exit status 80 (309.511066ms)

-- stdout --
	* Adding node m03 to cluster multinode-321147 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-321147-m03 already exists in multinode-321147-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-321147-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-321147-m03: (1.977394646s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.19s)
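
Note: the two failure modes above are distinct: starting a profile whose name
collides with a machine name of an existing multi-node profile exits 14
(MK_USAGE), while node add colliding with a standalone profile exits 80
(GUEST_NODE_ADD). A minimal sketch against an illustrative "demo" profile:

    $ out/minikube-linux-arm64 start -p demo-m02 --driver=docker --container-runtime=containerd   # exit 14
    $ out/minikube-linux-arm64 start -p demo-m03 --driver=docker --container-runtime=containerd   # ok: new profile
    $ out/minikube-linux-arm64 node add -p demo                                                   # exit 80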

TestPreload (114.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-811591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-811591 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.474172212s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-811591 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-811591 image pull gcr.io/k8s-minikube/busybox: (1.180959346s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-811591
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-811591: (12.125566364s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-811591 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-811591 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.31362126s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-811591 image list
helpers_test.go:175: Cleaning up "test-preload-811591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-811591
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-811591: (2.448834062s)
--- PASS: TestPreload (114.88s)
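
Note: the flow pins an older Kubernetes with --preload=false, side-loads an
image, then restarts with preloads enabled and checks the image survived. A
minimal sketch with an illustrative profile name:

    $ out/minikube-linux-arm64 start -p preload-demo --preload=false \
        --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
    $ out/minikube-linux-arm64 stop -p preload-demo
    $ out/minikube-linux-arm64 start -p preload-demo
    $ out/minikube-linux-arm64 -p preload-demo image list   # busybox should still appear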

TestScheduledStopUnix (108.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-381968 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-381968 --memory=2048 --driver=docker  --container-runtime=containerd: (31.434811336s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-381968 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-381968 -n scheduled-stop-381968
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-381968 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-381968 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-381968 -n scheduled-stop-381968
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-381968
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-381968 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0803 23:27:18.644149 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-381968
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-381968: exit status 7 (64.421451ms)

-- stdout --
	scheduled-stop-381968
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-381968 -n scheduled-stop-381968
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-381968 -n scheduled-stop-381968: exit status 7 (64.742508ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-381968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-381968
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-381968: (5.310608061s)
--- PASS: TestScheduledStopUnix (108.26s)
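
Note: --schedule arms a background stop that can be re-armed or cancelled; once
it fires, status reports Stopped and exits 7. A minimal sketch:

    $ out/minikube-linux-arm64 stop -p demo --schedule 5m          # arm
    $ out/minikube-linux-arm64 stop -p demo --cancel-scheduled     # disarm
    $ out/minikube-linux-arm64 stop -p demo --schedule 15s         # re-arm, then wait
    $ out/minikube-linux-arm64 status --format={{.Host}} -p demo   # prints "Stopped"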

TestInsufficientStorage (10.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-076041 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-076041 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.203526101s)

-- stdout --
	{"specversion":"1.0","id":"9072fed6-ab73-4fed-880c-89c8e43efd34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-076041] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8ce4112d-1337-4855-8485-97fd0db263bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19364"}}
	{"specversion":"1.0","id":"42261746-737a-4f1f-b51f-382cf51d2d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac125d1a-a0d8-4c43-a8bf-56f1abb90202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig"}}
	{"specversion":"1.0","id":"aa72a590-3fe2-4390-b9ab-1577663d3730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube"}}
	{"specversion":"1.0","id":"40d581af-07ac-4a5d-be1d-5db201d7fcff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e8c841a8-54cf-4b3d-a41c-9b1967ab65c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1ebe088f-3bea-4740-ab5f-dd4d217b571a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c5eeddad-2a05-4b3e-8276-2740a228cc15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"30aa6f51-7d7c-4460-ae12-97e565bd3fc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c349d12-b670-4036-94cb-880757a895d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"65499bb2-e8f5-4be5-b46c-2a0a301e6569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-076041\" primary control-plane node in \"insufficient-storage-076041\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a79fe09-1b82-489c-bed4-8c0dfd34d17e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10a7fc0-e3e7-44a5-9da9-2592afdb9ede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a58a9ba-8191-4c95-98c4-90d286e02a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-076041 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-076041 --output=json --layout=cluster: exit status 7 (294.047545ms)

-- stdout --
	{"Name":"insufficient-storage-076041","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-076041","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0803 23:27:45.603255 1334739 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-076041" does not appear in /home/jenkins/minikube-integration/19364-1180294/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-076041 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-076041 --output=json --layout=cluster: exit status 7 (290.199521ms)

-- stdout --
	{"Name":"insufficient-storage-076041","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-076041","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0803 23:27:45.895982 1334799 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-076041" does not appear in /home/jenkins/minikube-integration/19364-1180294/kubeconfig
	E0803 23:27:45.905801 1334799 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/insufficient-storage-076041/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-076041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-076041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-076041: (1.913259547s)
--- PASS: TestInsufficientStorage (10.70s)
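
Note: with --output=json every start step is emitted as a CloudEvent, so the
failure is machine-readable: an "io.k8s.sigs.minikube.error" event with
exitcode 26 (RSRC_DOCKER_STORAGE). A minimal sketch of picking it out of the
stream; the jq filter is illustrative:

    $ out/minikube-linux-arm64 start -p demo --output=json --driver=docker \
        --container-runtime=containerd \
        | jq -c 'select(.type == "io.k8s.sigs.minikube.error")'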

TestRunningBinaryUpgrade (98.9s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.450677912 start -p running-upgrade-492836 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.450677912 start -p running-upgrade-492836 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.546332903s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-492836 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-492836 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (46.256104233s)
helpers_test.go:175: Cleaning up "running-upgrade-492836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-492836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-492836: (3.232938889s)
--- PASS: TestRunningBinaryUpgrade (98.90s)

TestKubernetesUpgrade (361.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.954542802s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-139734
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-139734: (1.348200625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-139734 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-139734 status --format={{.Host}}: exit status 7 (160.322411ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m43.58106849s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-139734 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (106.279407ms)

-- stdout --
	* [kubernetes-upgrade-139734] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-139734
	    minikube start -p kubernetes-upgrade-139734 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1397342 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-139734 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139734 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (11.005730757s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-139734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-139734
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-139734: (2.56893738s)
--- PASS: TestKubernetesUpgrade (361.83s)
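
Note: upgrades happen by restarting the same profile with a newer
--kubernetes-version; downgrades are refused with exit 106 and require the
delete/recreate path spelled out in the suggestion above. A minimal sketch:

    $ out/minikube-linux-arm64 start -p demo --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 stop -p demo
    $ out/minikube-linux-arm64 start -p demo --kubernetes-version=v1.31.0-rc.0   # upgrade: ok
    $ out/minikube-linux-arm64 start -p demo --kubernetes-version=v1.20.0        # downgrade: exit 106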

TestMissingContainerUpgrade (151.23s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4133204587 start -p missing-upgrade-378787 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4133204587 start -p missing-upgrade-378787 --memory=2200 --driver=docker  --container-runtime=containerd: (1m16.315043926s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-378787
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-378787: (10.326118864s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-378787
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-378787 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-378787 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.025972204s)
helpers_test.go:175: Cleaning up "missing-upgrade-378787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-378787
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-378787: (2.297518049s)
--- PASS: TestMissingContainerUpgrade (151.23s)

TestPause/serial/Start (68.47s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-596879 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-596879 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m8.470465284s)
--- PASS: TestPause/serial/Start (68.47s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (98.476056ms)

-- stdout --
	* [NoKubernetes-328404] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
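
Note: --no-kubernetes and --kubernetes-version are mutually exclusive (exit
14), and a lingering global kubernetes-version config triggers the same clash.
A minimal sketch:

    $ out/minikube-linux-arm64 start -p demo --no-kubernetes --kubernetes-version=1.20   # rejected
    $ out/minikube-linux-arm64 config unset kubernetes-version
    $ out/minikube-linux-arm64 start -p demo --no-kubernetes   # container host only, no k8s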

TestNoKubernetes/serial/StartWithK8s (43.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-328404 --driver=docker  --container-runtime=containerd
E0803 23:27:48.213220 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-328404 --driver=docker  --container-runtime=containerd: (43.552070906s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-328404 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.93s)

TestNoKubernetes/serial/StartWithStopK8s (16.59s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.331553229s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-328404 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-328404 status -o json: exit status 2 (309.723948ms)
-- stdout --
	{"Name":"NoKubernetes-328404","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
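
The JSON above is what a profile looks like when its host container is up but Kubernetes has been dropped: Host Running, Kubelet and APIServer Stopped, and `status` exits 2 to flag the degraded state. A small sketch that decodes just the fields visible in the log; the struct name and field set are illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors only the keys shown in the status output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-328404","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The test's expectation: host up, Kubernetes components stopped.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}
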
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-328404
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-328404: (1.945598058s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.59s)

TestNoKubernetes/serial/Start (5.58s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-328404 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.5749278s)
--- PASS: TestNoKubernetes/serial/Start (5.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-328404 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-328404 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.430582ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
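
`systemctl is-active --quiet` exits 0 only for an active unit, so the stopped kubelet yields status 3 inside the guest, which the ssh wrapper surfaces as exit status 1; the assertion is purely on the exit code. A sketch of the same check, assuming it runs directly on the host rather than through `minikube ssh`:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Any non-zero exit from is-active confirms kubelet is not running.
	err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running:", err)
	} else {
		fmt.Println("kubelet is active")
	}
}
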
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1.01s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-328404
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-328404: (1.245483329s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestPause/serial/SecondStartNoReconfiguration (6.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-596879 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-596879 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.649951275s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.68s)

TestNoKubernetes/serial/StartNoArgs (6.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-328404 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-328404 --driver=docker  --container-runtime=containerd: (6.857536337s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

TestPause/serial/Pause (0.91s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-596879 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-328404 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-328404 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.912171ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-596879 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-596879 --output=json --layout=cluster: exit status 2 (378.910775ms)
-- stdout --
	{"Name":"pause-596879","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-596879","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
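
The --layout=cluster status borrows HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, and the command exits 2 because the cluster is not fully running, even though paused is exactly what this test wants. A sketch decoding only the top-level fields shown above; the struct is illustrative and omits the nested Components and Nodes:

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus captures just the top-level fields of the output above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	raw := `{"Name":"pause-596879","StatusCode":418,"StatusName":"Paused"}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.StatusCode == 418 && cs.StatusName == "Paused") // 418 = Paused in this layout
}
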
--- PASS: TestPause/serial/VerifyStatus (0.38s)

TestPause/serial/Unpause (0.68s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-596879 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-596879 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (3.02s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-596879 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-596879 --alsologtostderr -v=5: (3.014394491s)
--- PASS: TestPause/serial/DeletePaused (3.02s)

TestPause/serial/VerifyDeletedResources (0.16s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-596879
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-596879: exit status 1 (27.231659ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-596879: no such volume
** /stderr **
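
`docker volume inspect` on a deleted volume prints an empty array on stdout and fails with "no such volume", so a non-zero exit is the positive signal that `minikube delete` cleaned up after itself. A sketch of that check, reusing the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A non-zero exit here means the volume is gone, i.e. cleanup worked.
	err := exec.Command("docker", "volume", "inspect", "pause-596879").Run()
	if err != nil {
		fmt.Println("volume removed as expected:", err)
	} else {
		fmt.Println("volume still present")
	}
}
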
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (110.34s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1186395455 start -p stopped-upgrade-549915 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0803 23:32:18.644012 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1186395455 start -p stopped-upgrade-549915 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.251944989s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1186395455 -p stopped-upgrade-549915 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1186395455 -p stopped-upgrade-549915 stop: (20.004064806s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-549915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0803 23:32:48.213490 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-549915 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.084356946s)
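
The upgrade scenario is three sequential CLI invocations: start the profile with a pinned v1.26.0 release binary, stop it, then start the same profile with the binary under test and let it migrate the stopped cluster. A compressed sketch of that sequence, using the two binary paths from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one CLI step and aborts the sequence on failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(fmt.Sprintf("%s %v: %v", bin, args, err))
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.1186395455" // pinned release binary from the log
	newBin := "out/minikube-linux-arm64"         // binary under test
	run(oldBin, "start", "-p", "stopped-upgrade-549915", "--memory=2200",
		"--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", "stopped-upgrade-549915", "stop")
	run(newBin, "start", "-p", "stopped-upgrade-549915", "--memory=2200",
		"--driver=docker", "--container-runtime=containerd")
}
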
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.34s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-549915
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-549915: (1.042044608s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

TestNetworkPlugins/group/false (5.5s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-374898 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-374898 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (255.115213ms)
-- stdout --
	* [false-374898] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19364
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0803 23:35:15.785159 1373142 out.go:291] Setting OutFile to fd 1 ...
	I0803 23:35:15.785754 1373142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:35:15.785808 1373142 out.go:304] Setting ErrFile to fd 2...
	I0803 23:35:15.785833 1373142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0803 23:35:15.786096 1373142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19364-1180294/.minikube/bin
	I0803 23:35:15.786542 1373142 out.go:298] Setting JSON to false
	I0803 23:35:15.787492 1373142 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29861,"bootTime":1722698255,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0803 23:35:15.787587 1373142 start.go:139] virtualization:  
	I0803 23:35:15.800169 1373142 out.go:177] * [false-374898] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0803 23:35:15.802343 1373142 out.go:177]   - MINIKUBE_LOCATION=19364
	I0803 23:35:15.802902 1373142 notify.go:220] Checking for updates...
	I0803 23:35:15.807713 1373142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0803 23:35:15.809734 1373142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19364-1180294/kubeconfig
	I0803 23:35:15.814732 1373142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19364-1180294/.minikube
	I0803 23:35:15.830664 1373142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0803 23:35:15.834572 1373142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0803 23:35:15.838789 1373142 config.go:182] Loaded profile config "force-systemd-flag-272336": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0803 23:35:15.838939 1373142 driver.go:392] Setting default libvirt URI to qemu:///system
	I0803 23:35:15.867511 1373142 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0803 23:35:15.867619 1373142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0803 23:35:15.958223 1373142 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:58 SystemTime:2024-08-03 23:35:15.948308651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1066-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0803 23:35:15.958333 1373142 docker.go:307] overlay module found
	I0803 23:35:15.972130 1373142 out.go:177] * Using the docker driver based on user configuration
	I0803 23:35:15.973973 1373142 start.go:297] selected driver: docker
	I0803 23:35:15.974000 1373142 start.go:901] validating driver "docker" against <nil>
	I0803 23:35:15.974016 1373142 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0803 23:35:15.976307 1373142 out.go:177] 
	W0803 23:35:15.978149 1373142 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0803 23:35:15.980033 1373142 out.go:177] 
** /stderr **
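
This MK_USAGE exit is also the point of the test: --cni=false cannot be combined with the containerd runtime, which relies on a CNI plugin for pod networking, and minikube validates that before creating anything. A hypothetical re-creation of the guard, for illustration only:

package main

import (
	"errors"
	"fmt"
)

// validateCNI sketches the pre-flight check seen above: a runtime that
// delegates pod networking to CNI cannot start with CNI disabled.
// Behaviour for runtimes other than containerd is an assumption here.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime == "containerd" {
		return errors.New(`the "containerd" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false")) // mirrors the failure above
	fmt.Println(validateCNI("docker", "false"))     // no error expected
}
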
net_test.go:88: 
----------------------- debugLogs start: false-374898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-374898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-374898

>>> host: /etc/nsswitch.conf:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/hosts:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/resolv.conf:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-374898

>>> host: crictl pods:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: crictl containers:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> k8s: describe netcat deployment:
error: context "false-374898" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-374898" does not exist

>>> k8s: netcat logs:
error: context "false-374898" does not exist

>>> k8s: describe coredns deployment:
error: context "false-374898" does not exist

>>> k8s: describe coredns pods:
error: context "false-374898" does not exist

>>> k8s: coredns logs:
error: context "false-374898" does not exist

>>> k8s: describe api server pod(s):
error: context "false-374898" does not exist

>>> k8s: api server logs:
error: context "false-374898" does not exist

>>> host: /etc/cni:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: ip a s:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: ip r s:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: iptables-save:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: iptables table nat:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> k8s: describe kube-proxy daemon set:
error: context "false-374898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-374898" does not exist

>>> k8s: kube-proxy logs:
error: context "false-374898" does not exist

>>> host: kubelet daemon status:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: kubelet daemon config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> k8s: kubelet logs:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-374898

>>> host: docker daemon status:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: docker daemon config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/docker/daemon.json:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: docker system info:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: cri-docker daemon status:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: cri-docker daemon config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: cri-dockerd version:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: containerd daemon status:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: containerd daemon config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/containerd/config.toml:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: containerd config dump:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: crio daemon status:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: crio daemon config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: /etc/crio:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

>>> host: crio config:
* Profile "false-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-374898"

----------------------- debugLogs end: false-374898 [took: 5.01297828s] --------------------------------
helpers_test.go:175: Cleaning up "false-374898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-374898
--- PASS: TestNetworkPlugins/group/false (5.50s)

TestStartStop/group/old-k8s-version/serial/FirstStart (117.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0803 23:37:18.644164 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:37:48.213309 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-820414 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (1m57.494692225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (117.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-820414 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cdd5d965-395b-433b-93d7-c79fe0b4a2a6] Pending
helpers_test.go:344: "busybox" [cdd5d965-395b-433b-93d7-c79fe0b4a2a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cdd5d965-395b-433b-93d7-c79fe0b4a2a6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005619939s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-820414 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.62s)
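
DeployApp is a create-wait-exec cycle: apply the busybox manifest, wait for the pod to become Ready, then exec a trivial command to prove the container is actually usable. A sketch of the same cycle driven through kubectl, with the context name taken from the log; using `kubectl wait` here is a simplification of the harness's own label-based polling:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// kubectl runs one kubectl step against the test cluster's context.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "old-k8s-version-820414"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"create", "-f", "testdata/busybox.yaml"},
		{"wait", "--for=condition=Ready", "pod/busybox", "--timeout=8m0s"},
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", s, err)
			return
		}
	}
}
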

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-820414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-820414 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-820414 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-820414 --alsologtostderr -v=3: (12.110984671s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-820414 -n old-k8s-version-820414
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-820414 -n old-k8s-version-820414: exit status 7 (68.419444ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-820414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
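
`status` exits 7 when the host itself is stopped, and the harness explicitly tolerates that here ("may be ok"): the cluster was just stopped on purpose, and addon settings are written to the profile's stored config, so they can be toggled without a running host. A sketch of treating exit code 7 as an answer rather than an error; the helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// hostStatus queries the host state, accepting exit code 7 (stopped
// host) as a valid result instead of a failure.
func hostStatus(bin, profile string) (string, error) {
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		err = nil // stopped is the expected state right after `minikube stop`
	}
	return string(out), err
}

func main() {
	st, err := hostStatus("out/minikube-linux-arm64", "old-k8s-version-820414")
	fmt.Println(st, err)
}
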

TestStartStop/group/no-preload/serial/FirstStart (84.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-344284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
E0803 23:40:21.688889 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-344284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (1m24.772177382s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.77s)

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-344284 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2818ff3a-7236-4c33-958b-e564c556d719] Pending
helpers_test.go:344: "busybox" [2818ff3a-7236-4c33-958b-e564c556d719] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2818ff3a-7236-4c33-958b-e564c556d719] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00278223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-344284 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-344284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-344284 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028756814s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-344284 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-344284 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-344284 --alsologtostderr -v=3: (12.081455527s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-344284 -n no-preload-344284
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-344284 -n no-preload-344284: exit status 7 (69.999667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-344284 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (268.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-344284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
E0803 23:42:18.643974 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:42:48.212972 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-344284 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (4m27.814182928s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-344284 -n no-preload-344284
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.18s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8cz7j" [63ab1fcc-d365-494a-8f1f-45f99bd1fa36] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004030783s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8cz7j" [63ab1fcc-d365-494a-8f1f-45f99bd1fa36] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008075909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-820414 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-820414 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)
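
VerifyKubernetesImages lists every image in the container runtime and reports anything outside the expected Kubernetes set; the kindnetd and busybox entries above are logged as "non-minikube" but are expected extras, not failures. A sketch of that kind of filter; the image list and allowed prefixes below are illustrative, not minikube's real allow-list:

package main

import (
	"fmt"
	"strings"
)

func main() {
	images := []string{ // names as they appear in the log
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"kindest/kindnetd:v20240719-e7903573",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	allowed := []string{"registry.k8s.io/"} // illustrative allow-list
	for _, img := range images {
		ok := false
		for _, p := range allowed {
			if strings.HasPrefix(img, p) {
				ok = true
				break
			}
		}
		if !ok {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}
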

TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-820414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-820414 -n old-k8s-version-820414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-820414 -n old-k8s-version-820414: exit status 2 (331.76198ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-820414 -n old-k8s-version-820414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-820414 -n old-k8s-version-820414: exit status 2 (304.777463ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-820414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-820414 -n old-k8s-version-820414
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-820414 -n old-k8s-version-820414
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.14s)

TestStartStop/group/embed-certs/serial/FirstStart (63.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-850046 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-850046 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m3.092472061s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wm6dm" [bb3d6cbe-0236-427b-879c-1e22ea66212a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004441599s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)
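
Note: the "waiting 9m0s for pods matching ..." helper polls the cluster until a pod with the given label selector is healthy. A rough client-go equivalent, as a sketch: it assumes the current kubeconfig context points at the profile, and it only checks the Running phase, whereas the real helper also checks readiness:

    // dashboardwait.go: poll for a Running kubernetes-dashboard pod.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(9 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println("healthy:", p.Name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        panic("timed out waiting for dashboard pod")
    }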

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.18s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wm6dm" [bb3d6cbe-0236-427b-879c-1e22ea66212a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005118753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-344284 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.18s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-344284 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.00s)

TestStartStop/group/no-preload/serial/Pause (3.65s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-344284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-344284 -n no-preload-344284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-344284 -n no-preload-344284: exit status 2 (388.205082ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-344284 -n no-preload-344284
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-344284 -n no-preload-344284: exit status 2 (386.255253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-344284 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-344284 -n no-preload-344284
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-344284 -n no-preload-344284
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.65s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-178120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-178120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m18.418641624s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.42s)

TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-850046 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [def41b3a-9b82-4379-a955-6bc849d8b318] Pending
helpers_test.go:344: "busybox" [def41b3a-9b82-4379-a955-6bc849d8b318] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [def41b3a-9b82-4379-a955-6bc849d8b318] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003862689s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-850046 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)
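
Note: DeployApp applies testdata/busybox.yaml, waits for the labelled pod to become healthy, then verifies the container's open-file limit with `ulimit -n`. The same flow can be reproduced with kubectl alone; a sketch, with the context name, label selector, and 8m timeout taken from the log above:

    // deploycheck.go: apply the busybox manifest, wait for readiness, then
    // run the same `ulimit -n` probe the test uses.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) string {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        return string(out)
    }

    func main() {
        ctx := "--context=embed-certs-850046"
        run(ctx, "create", "-f", "testdata/busybox.yaml")
        run(ctx, "wait", "--for=condition=Ready", "pod",
            "-l", "integration-test=busybox", "--timeout=8m")
        fmt.Print(run(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
    }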

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-850046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-850046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039707727s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-850046 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)
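
Note: the --images/--registries flags override where an addon pulls its images from, and the follow-up `kubectl describe deploy/metrics-server` confirms the override landed in the Deployment spec. A sketch of that verification; it assumes the resulting reference is the registry override joined to the image, i.e. fake.domain/registry.k8s.io/echoserver:1.4, which is what the flags above suggest:

    // addonimage.go: check that the metrics-server Deployment picked up the
    // overridden image/registry. Sketch, not the test's own assertion.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "embed-certs-850046",
            "describe", "deploy/metrics-server", "-n", "kube-system").Output()
        if err != nil {
            panic(err)
        }
        // Assumed final image reference: <registry override>/<image>.
        fmt.Println("override applied:",
            strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4"))
    }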

TestStartStop/group/embed-certs/serial/Stop (12.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-850046 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-850046 --alsologtostderr -v=3: (12.332767343s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-850046 -n embed-certs-850046
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-850046 -n embed-certs-850046: exit status 7 (67.959769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-850046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (280.71s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-850046 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0803 23:47:18.644088 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-850046 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m40.34431432s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-850046 -n embed-certs-850046
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (280.71s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-178120 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d3e902c-6806-46be-803c-1c050c424742] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9d3e902c-6806-46be-803c-1c050c424742] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004505346s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-178120 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-178120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-178120 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156571313s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-178120 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-178120 --alsologtostderr -v=3
E0803 23:47:48.212864 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-178120 --alsologtostderr -v=3: (12.362615199s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120: exit status 7 (71.29339ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-178120 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-178120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0803 23:48:52.351858 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.357105 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.367879 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.388193 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.428439 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.509446 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.669940 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:52.990287 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:53.631139 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:54.911677 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:48:57.471932 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:49:02.592910 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:49:12.833097 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:49:33.313327 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:50:14.274071 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:51:06.590853 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.596175 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.606513 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.626822 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.667181 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.747604 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:06.908190 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:07.228653 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:07.869146 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:09.150001 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:11.710234 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:16.830759 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:27.071109 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:51:36.194916 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
E0803 23:51:47.551803 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-178120 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m27.958005667s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.29s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6fkvn" [895770ee-e180-4314-addb-258d798ac728] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003377292s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-6fkvn" [895770ee-e180-4314-addb-258d798ac728] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003717945s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-850046 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-850046 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-850046 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-850046 -n embed-certs-850046
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-850046 -n embed-certs-850046: exit status 2 (336.148706ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-850046 -n embed-certs-850046
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-850046 -n embed-certs-850046: exit status 2 (343.148577ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-850046 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-850046 -n embed-certs-850046
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-850046 -n embed-certs-850046
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.41s)

TestStartStop/group/newest-cni/serial/FirstStart (44.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-142458 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
E0803 23:52:18.643692 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-142458 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (44.852910181s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42gtz" [ccc382d2-6559-4322-a469-b45436111e9c] Running
E0803 23:52:28.512809 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:52:31.270957 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0052696s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42gtz" [ccc382d2-6559-4322-a469-b45436111e9c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005068341s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-178120 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-178120 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-178120 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120: exit status 2 (381.708713ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120: exit status 2 (397.008679ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-178120 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-178120 -n default-k8s-diff-port-178120
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.73s)

TestNetworkPlugins/group/auto/Start (72.47s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0803 23:52:48.212584 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m12.472963471s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.47s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-142458 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-142458 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.489343459s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.49s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-142458 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-142458 --alsologtostderr -v=3: (1.33766056s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-142458 -n newest-cni-142458
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-142458 -n newest-cni-142458: exit status 7 (109.602821ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-142458 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/newest-cni/serial/SecondStart (24.59s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-142458 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-142458 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-rc.0: (24.224770974s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-142458 -n newest-cni-142458
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.59s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.79s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-142458 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.79s)

TestStartStop/group/newest-cni/serial/Pause (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-142458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-142458 -n newest-cni-142458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-142458 -n newest-cni-142458: exit status 2 (320.200747ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-142458 -n newest-cni-142458
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-142458 -n newest-cni-142458: exit status 2 (370.685642ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-142458 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-142458 -n newest-cni-142458
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-142458 -n newest-cni-142458
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.04s)
E0803 23:59:00.009897 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/auto-374898/client.crt: no such file or directory
E0803 23:59:01.290877 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/auto-374898/client.crt: no such file or directory
E0803 23:59:03.851488 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/auto-374898/client.crt: no such file or directory
E0803 23:59:08.972316 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/auto-374898/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (76.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0803 23:53:50.433933 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
E0803 23:53:52.351997 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/old-k8s-version-820414/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m16.515805875s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (76.52s)

TestNetworkPlugins/group/auto/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.41s)
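
Note: the KubeletFlags tests assert on the kubelet command line inside the node; `pgrep -a kubelet` over `minikube ssh` prints the PID plus the full invocation, which the test then inspects (e.g. for the expected container-runtime flags). Reproduced as a sketch:

    // kubeletflags.go: print the kubelet command line from inside the node.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "ssh",
            "-p", "auto-374898", "pgrep -a kubelet").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // "<pid> /path/to/kubelet --flag=... ..."
    }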

TestNetworkPlugins/group/auto/NetCatPod (11.48s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jpdj2" [f35d35dd-8b15-4596-8e14-cf11a6ebf1bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jpdj2" [f35d35dd-8b15-4596-8e14-cf11a6ebf1bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004321765s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.48s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
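
Note: the DNS/Localhost/HairPin trio checks, from inside the netcat pod, in-cluster name resolution, loopback reachability, and hairpin traffic (the pod dialing itself through its own Service, which requires hairpin/NAT-loopback support from the CNI and kube-proxy setup). The three probes wrapped in a small Go sketch; the deployment/Service name `netcat` and port 8080 come from the test's netcat-deployment.yaml:

    // connectivity.go: rerun the three network probes via kubectl exec.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func probe(name, shellCmd string) {
        out, err := exec.Command("kubectl", "--context", "auto-374898",
            "exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
        fmt.Printf("%s: err=%v\n%s", name, err, out)
    }

    func main() {
        probe("dns", "nslookup kubernetes.default")
        probe("localhost", "nc -w 5 -i 5 -z localhost 8080")
        probe("hairpin", "nc -w 5 -i 5 -z netcat 8080") // pod -> its own Service
    }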

TestNetworkPlugins/group/calico/Start (66.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m6.31878542s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zt6wh" [81c701a2-70af-4f03-9f0f-98a8639c9d08] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004668004s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lfrmc" [f95842f5-17bf-4ee4-986a-900f59fabeee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lfrmc" [f95842f5-17bf-4ee4-986a-900f59fabeee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003878638s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (66.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m6.16165803s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.07s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-48sgv" [b6ae5e8b-19a0-4318-974b-6fe08cfe56b9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.044845092s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.07s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (10.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c87w4" [c6ae065b-2cf8-4660-b77c-ce91799b602d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-c87w4" [c6ae065b-2cf8-4660-b77c-ce91799b602d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004446553s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.48s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.32s)
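The DNS, Localhost and HairPin checks above reduce to three kubectl probes against the netcat deployment. A minimal hand-reproduction sketch, assuming the calico-374898 profile and its netcat deployment were still running (both are deleted at the end of the run); the commands are taken verbatim from the log lines above:

  # probe in-cluster DNS, loopback, and hairpin connectivity by hand
  kubectl --context calico-374898 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context calico-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context calico-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"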

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0803 23:56:34.274782 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/no-preload-344284/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m26.60888159s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.61s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ksxbv" [f760c02f-9602-4fc2-90e1-d7db511d3219] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ksxbv" [f760c02f-9602-4fc2-90e1-d7db511d3219] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.020335271s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0803 23:57:18.643877 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/functional-851027/client.crt: no such file or directory
E0803 23:57:36.467299 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.472578 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.482859 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.503090 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.543330 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.623810 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:36.784268 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:37.104536 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:37.745464 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:39.026383 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:41.587079 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
E0803 23:57:46.707911 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.099004758s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wh2zp" [3b0be9de-6884-496b-b1b1-f43270061ebd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0803 23:57:48.212611 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/addons-369401/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-wh2zp" [3b0be9de-6884-496b-b1b1-f43270061ebd] Running
E0803 23:57:56.948099 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003839626s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nh9pj" [9faa2996-7520-4ea5-ae2b-d841f5a0559f] Running
E0803 23:58:17.428963 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/default-k8s-diff-port-178120/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004406851s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (53.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-374898 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (53.620982274s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qv4z5" [e7dc72cc-7066-4909-b8d9-b202402b843e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qv4z5" [e7dc72cc-7066-4909-b8d9-b202402b843e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004434963s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-374898 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-374898 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cc89c" [1c446525-e5c7-4366-a000-6782e6a38f6c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cc89c" [1c446525-e5c7-4366-a000-6782e6a38f6c] Running
E0803 23:59:19.213320 1185702 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19364-1180294/.minikube/profiles/auto-374898/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004079366s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-374898 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-374898 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
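Each network-plugin group above follows the same shape: start a dedicated cluster with the CNI under test, then run the ControllerPod, KubeletFlags, NetCatPod, DNS, Localhost and HairPin checks against it. A sketch of the start invocation, with flags copied verbatim from the Start log lines above (the -374898 profile suffix appears to be specific to this run):

  out/minikube-linux-arm64 start -p bridge-374898 --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=containerd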

                                                
                                    

Test skip (31/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-815058 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-815058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-815058
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-008690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-008690
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.64s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-374898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-374898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-374898

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/hosts:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/resolv.conf:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-374898

>>> host: crictl pods:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: crictl containers:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> k8s: describe netcat deployment:
error: context "kubenet-374898" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-374898" does not exist

>>> k8s: netcat logs:
error: context "kubenet-374898" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-374898" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-374898" does not exist

>>> k8s: coredns logs:
error: context "kubenet-374898" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-374898" does not exist

>>> k8s: api server logs:
error: context "kubenet-374898" does not exist

>>> host: /etc/cni:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: ip a s:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: ip r s:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: iptables-save:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: iptables table nat:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-374898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-374898" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-374898" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: kubelet daemon config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> k8s: kubelet logs:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-374898

>>> host: docker daemon status:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: docker daemon config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: docker system info:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: cri-docker daemon status:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: cri-docker daemon config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: cri-dockerd version:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: containerd daemon status:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: containerd daemon config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: containerd config dump:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: crio daemon status:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: crio daemon config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: /etc/crio:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

>>> host: crio config:
* Profile "kubenet-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-374898"

----------------------- debugLogs end: kubenet-374898 [took: 4.443267852s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-374898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-374898
--- SKIP: TestNetworkPlugins/group/kubenet (4.64s)
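Every probe in the debug dump above fails with "context was not found" or "Profile ... not found" because the test skips before a kubenet cluster is ever created. The log's own hint names the command that would create the missing profile:

  minikube start -p kubenet-374898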

                                                
                                    
TestNetworkPlugins/group/cilium (4.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-374898 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-374898

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-374898

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-374898

>>> host: /etc/nsswitch.conf:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/hosts:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/resolv.conf:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-374898

>>> host: crictl pods:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: crictl containers:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> k8s: describe netcat deployment:
error: context "cilium-374898" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-374898" does not exist

>>> k8s: netcat logs:
error: context "cilium-374898" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-374898" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-374898" does not exist

>>> k8s: coredns logs:
error: context "cilium-374898" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-374898" does not exist

>>> k8s: api server logs:
error: context "cilium-374898" does not exist

>>> host: /etc/cni:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: ip a s:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: ip r s:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: iptables-save:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: iptables table nat:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-374898

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-374898

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-374898" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-374898" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-374898

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-374898

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-374898" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-374898" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-374898" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-374898" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-374898" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: kubelet daemon config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> k8s: kubelet logs:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-374898

>>> host: docker daemon status:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: docker daemon config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: docker system info:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: cri-docker daemon status:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: cri-docker daemon config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: cri-dockerd version:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: containerd daemon status:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: containerd daemon config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: containerd config dump:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: crio daemon status:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: crio daemon config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: /etc/crio:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

>>> host: crio config:
* Profile "cilium-374898" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-374898"

----------------------- debugLogs end: cilium-374898 [took: 4.691013604s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-374898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-374898
--- SKIP: TestNetworkPlugins/group/cilium (4.87s)
