Test Report: Docker_Linux_containerd_arm64 19763

                    
aa5eddb378ec81f2e43c808f5204b861e96187fd:2024-10-07:36541

Tests failed (2/328)

|-------|--------------------------------------------------------|--------------|
| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 212.31       |
| 300   | TestStartStop/group/old-k8s-version/serial/SecondStart | 375.9        |
|-------|--------------------------------------------------------|--------------|
TestAddons/serial/Volcano (212.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:803: volcano-scheduler stabilized in 62.55281ms
addons_test.go:811: volcano-admission stabilized in 62.697946ms
addons_test.go:819: volcano-controller stabilized in 62.747233ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-dpq9k" [0307ae8b-aa6f-4108-ba51-4eacf8d8ed7f] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.030470585s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-8sxc6" [e65085a0-6d82-445b-b1f3-11531297713b] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.004044869s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-dnqbf" [aa710afe-1032-4fe6-b815-955c374522d3] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003769328s
addons_test.go:838: (dbg) Run:  kubectl --context addons-268164 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-268164 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-268164 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1f22076f-cf1d-4956-817a-9fd43f024bac] Pending
helpers_test.go:344: "test-job-nginx-0" [1f22076f-cf1d-4956-817a-9fd43f024bac] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-268164 -n addons-268164
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-07 12:00:12.153805693 +0000 UTC m=+428.179271440
addons_test.go:870: (dbg) Run:  kubectl --context addons-268164 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-268164 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-5f7bb62b-30b3-47b5-9113-41b54b167732
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sbfzd (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-sbfzd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-268164 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-268164 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
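
Note: the FailedScheduling event above ("0/1 nodes are unavailable: 1 Insufficient cpu.") combined with the pod spec (Requests: cpu: 1) indicates the test job asks for a full CPU on a node capped at 2 CPUs (see "NanoCpus" in the docker inspect output below), most of which the control plane and the many enabled addons already reserve. A minimal way to verify the shortfall, assuming kubectl access to this profile (illustrative follow-up commands, not part of the recorded run):

	# Show allocatable CPU and the requests already scheduled on the node.
	kubectl --context addons-268164 describe node addons-268164 | grep -A 10 'Allocated resources'
	# Print each pod's CPU request; their sum plus the job's 1-CPU request
	# must fit within the node's allocatable CPU for scheduling to succeed.
	kubectl --context addons-268164 get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.spec.containers[*].resources.requests.cpu}{"\n"}{end}'
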
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-268164
helpers_test.go:235: (dbg) docker inspect addons-268164:
-- stdout --
	[
	    {
	        "Id": "b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9",
	        "Created": "2024-10-07T11:53:42.507444231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1401573,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T11:53:42.639375256Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9/hosts",
	        "LogPath": "/var/lib/docker/containers/b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9/b7bfc36eed6a67b691964a37e76a19bb11a3f34c5081a52eded8f5bfd0e0b9a9-json.log",
	        "Name": "/addons-268164",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-268164:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-268164",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/80276808543a68745b44e464693cfa32646211611572ffffd04c55f57626e083-init/diff:/var/lib/docker/overlay2/056f79e8a8729c0886964eb01f46792a83efc9c9ba3dec7e1dde1dce89315afa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80276808543a68745b44e464693cfa32646211611572ffffd04c55f57626e083/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80276808543a68745b44e464693cfa32646211611572ffffd04c55f57626e083/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80276808543a68745b44e464693cfa32646211611572ffffd04c55f57626e083/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-268164",
	                "Source": "/var/lib/docker/volumes/addons-268164/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-268164",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-268164",
	                "name.minikube.sigs.k8s.io": "addons-268164",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32d8835585c605eef3e927bbd894152b0a8d946c648a4b873b1c49a0ffce7c7e",
	            "SandboxKey": "/var/run/docker/netns/32d8835585c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37898"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "37899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-268164": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c0572ef0d0ea3d4a90cb0fee0d617411603b60c6b7ef7e3be1d8ccd6e72a963b",
	                    "EndpointID": "21130662cad038301b339f61a38dc98572fac577fbc8b31a462eb0c43ed6743f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-268164",
	                        "b7bfc36eed6a"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
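
Note: two HostConfig fields above quantify the node's resource ceiling: "NanoCpus": 2000000000 equals 2 CPUs (the field is in billionths of a CPU), and "Memory": 4194304000 bytes equals 4000 MiB, matching the --memory=4000 start flag in the Audit table below. An illustrative one-liner to extract just those fields, assuming the container is still present:

	# Prints "2000000000 4194304000"; divide NanoCpus by 1e9 to get CPUs.
	docker inspect addons-268164 --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}'
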
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-268164 -n addons-268164
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 logs -n 25: (1.690786701s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-431096   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | -p download-only-431096              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| delete  | -p download-only-431096              | download-only-431096   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| start   | -o=json --download-only              | download-only-149351   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | -p download-only-149351              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| delete  | -p download-only-149351              | download-only-149351   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| delete  | -p download-only-431096              | download-only-431096   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| delete  | -p download-only-149351              | download-only-149351   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| start   | --download-only -p                   | download-docker-125049 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | download-docker-125049               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-125049            | download-docker-125049 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| start   | --download-only -p                   | binary-mirror-087536   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | binary-mirror-087536                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36621               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-087536              | binary-mirror-087536   | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| addons  | disable dashboard -p                 | addons-268164          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | addons-268164                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-268164          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | addons-268164                        |                        |         |         |                     |                     |
	| start   | -p addons-268164 --wait=true         | addons-268164          | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:56 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:53:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:53:18.162581 1401070 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:53:18.162746 1401070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:18.162772 1401070 out.go:358] Setting ErrFile to fd 2...
	I1007 11:53:18.162791 1401070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:18.163077 1401070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 11:53:18.163621 1401070 out.go:352] Setting JSON to false
	I1007 11:53:18.164589 1401070 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":92150,"bootTime":1728209849,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 11:53:18.164666 1401070 start.go:139] virtualization:  
	I1007 11:53:18.167756 1401070 out.go:177] * [addons-268164] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:53:18.171092 1401070 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:53:18.171152 1401070 notify.go:220] Checking for updates...
	I1007 11:53:18.173942 1401070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:53:18.176645 1401070 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 11:53:18.179398 1401070 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 11:53:18.181971 1401070 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 11:53:18.184478 1401070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:53:18.187346 1401070 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:53:18.214873 1401070 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:53:18.215007 1401070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:18.271900 1401070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 11:53:18.261651509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:18.272015 1401070 docker.go:318] overlay module found
	I1007 11:53:18.274968 1401070 out.go:177] * Using the docker driver based on user configuration
	I1007 11:53:18.277669 1401070 start.go:297] selected driver: docker
	I1007 11:53:18.277697 1401070 start.go:901] validating driver "docker" against <nil>
	I1007 11:53:18.277713 1401070 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:53:18.278350 1401070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:18.335513 1401070 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 11:53:18.32605286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:18.335819 1401070 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:53:18.336051 1401070 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:53:18.338594 1401070 out.go:177] * Using Docker driver with root privileges
	I1007 11:53:18.341186 1401070 cni.go:84] Creating CNI manager for ""
	I1007 11:53:18.341255 1401070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 11:53:18.341268 1401070 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:53:18.341374 1401070 start.go:340] cluster config:
	{Name:addons-268164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-268164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:53:18.344219 1401070 out.go:177] * Starting "addons-268164" primary control-plane node in "addons-268164" cluster
	I1007 11:53:18.346804 1401070 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 11:53:18.349514 1401070 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 11:53:18.352138 1401070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:18.352208 1401070 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 11:53:18.352223 1401070 cache.go:56] Caching tarball of preloaded images
	I1007 11:53:18.352239 1401070 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:53:18.352323 1401070 preload.go:172] Found /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 11:53:18.352334 1401070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1007 11:53:18.352692 1401070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/config.json ...
	I1007 11:53:18.352722 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/config.json: {Name:mk80a4c3d606b630af5fe0e410866131d391fc35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:18.367480 1401070 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:53:18.367632 1401070 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 11:53:18.367653 1401070 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 11:53:18.367657 1401070 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 11:53:18.367665 1401070 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 11:53:18.367672 1401070 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1007 11:53:35.293535 1401070 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1007 11:53:35.293577 1401070 cache.go:194] Successfully downloaded all kic artifacts
	I1007 11:53:35.293628 1401070 start.go:360] acquireMachinesLock for addons-268164: {Name:mk205c4725ef6b1e25cb0ac9ca84a470496b7504 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:53:35.293762 1401070 start.go:364] duration metric: took 109.864µs to acquireMachinesLock for "addons-268164"
	I1007 11:53:35.293792 1401070 start.go:93] Provisioning new machine with config: &{Name:addons-268164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-268164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 11:53:35.293869 1401070 start.go:125] createHost starting for "" (driver="docker")
	I1007 11:53:35.295567 1401070 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 11:53:35.295839 1401070 start.go:159] libmachine.API.Create for "addons-268164" (driver="docker")
	I1007 11:53:35.295887 1401070 client.go:168] LocalClient.Create starting
	I1007 11:53:35.296004 1401070 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem
	I1007 11:53:36.002720 1401070 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem
	I1007 11:53:36.229844 1401070 cli_runner.go:164] Run: docker network inspect addons-268164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 11:53:36.244738 1401070 cli_runner.go:211] docker network inspect addons-268164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 11:53:36.244830 1401070 network_create.go:284] running [docker network inspect addons-268164] to gather additional debugging logs...
	I1007 11:53:36.244861 1401070 cli_runner.go:164] Run: docker network inspect addons-268164
	W1007 11:53:36.258980 1401070 cli_runner.go:211] docker network inspect addons-268164 returned with exit code 1
	I1007 11:53:36.259040 1401070 network_create.go:287] error running [docker network inspect addons-268164]: docker network inspect addons-268164: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-268164 not found
	I1007 11:53:36.259058 1401070 network_create.go:289] output of [docker network inspect addons-268164]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-268164 not found
	
	** /stderr **
	I1007 11:53:36.259175 1401070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:53:36.275522 1401070 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197c430}
	I1007 11:53:36.275581 1401070 network_create.go:124] attempt to create docker network addons-268164 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1007 11:53:36.275643 1401070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-268164 addons-268164
	I1007 11:53:36.347800 1401070 network_create.go:108] docker network addons-268164 192.168.49.0/24 created
	I1007 11:53:36.347837 1401070 kic.go:121] calculated static IP "192.168.49.2" for the "addons-268164" container
	I1007 11:53:36.347920 1401070 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 11:53:36.363015 1401070 cli_runner.go:164] Run: docker volume create addons-268164 --label name.minikube.sigs.k8s.io=addons-268164 --label created_by.minikube.sigs.k8s.io=true
	I1007 11:53:36.380082 1401070 oci.go:103] Successfully created a docker volume addons-268164
	I1007 11:53:36.380183 1401070 cli_runner.go:164] Run: docker run --rm --name addons-268164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-268164 --entrypoint /usr/bin/test -v addons-268164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 11:53:38.405213 1401070 cli_runner.go:217] Completed: docker run --rm --name addons-268164-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-268164 --entrypoint /usr/bin/test -v addons-268164:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.024987915s)
	I1007 11:53:38.405246 1401070 oci.go:107] Successfully prepared a docker volume addons-268164
	I1007 11:53:38.405265 1401070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:38.405285 1401070 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 11:53:38.405355 1401070 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-268164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 11:53:42.434590 1401070 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-268164:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.029193911s)
	I1007 11:53:42.434623 1401070 kic.go:203] duration metric: took 4.029335569s to extract preloaded images to volume ...
	W1007 11:53:42.434769 1401070 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 11:53:42.434877 1401070 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 11:53:42.493445 1401070 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-268164 --name addons-268164 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-268164 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-268164 --network addons-268164 --ip 192.168.49.2 --volume addons-268164:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 11:53:42.806057 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Running}}
	I1007 11:53:42.829929 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:53:42.849697 1401070 cli_runner.go:164] Run: docker exec addons-268164 stat /var/lib/dpkg/alternatives/iptables
	I1007 11:53:42.910235 1401070 oci.go:144] the created container "addons-268164" has a running status.
	I1007 11:53:42.910263 1401070 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa...
	I1007 11:53:43.720945 1401070 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 11:53:43.745067 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:53:43.766540 1401070 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 11:53:43.766559 1401070 kic_runner.go:114] Args: [docker exec --privileged addons-268164 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 11:53:43.824859 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:53:43.853024 1401070 machine.go:93] provisionDockerMachine start ...
	I1007 11:53:43.853114 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:43.874610 1401070 main.go:141] libmachine: Using SSH client type: native
	I1007 11:53:43.874865 1401070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 37896 <nil> <nil>}
	I1007 11:53:43.874875 1401070 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 11:53:44.011764 1401070 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-268164
	
	I1007 11:53:44.011829 1401070 ubuntu.go:169] provisioning hostname "addons-268164"
	I1007 11:53:44.011921 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:44.030928 1401070 main.go:141] libmachine: Using SSH client type: native
	I1007 11:53:44.031180 1401070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 37896 <nil> <nil>}
	I1007 11:53:44.031216 1401070 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-268164 && echo "addons-268164" | sudo tee /etc/hostname
	I1007 11:53:44.188201 1401070 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-268164
	
	I1007 11:53:44.188310 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:44.206736 1401070 main.go:141] libmachine: Using SSH client type: native
	I1007 11:53:44.206977 1401070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 37896 <nil> <nil>}
	I1007 11:53:44.207002 1401070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-268164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-268164/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-268164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:53:44.339619 1401070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:53:44.339646 1401070 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1394934/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1394934/.minikube}
	I1007 11:53:44.339675 1401070 ubuntu.go:177] setting up certificates
	I1007 11:53:44.339686 1401070 provision.go:84] configureAuth start
	I1007 11:53:44.339754 1401070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-268164
	I1007 11:53:44.356911 1401070 provision.go:143] copyHostCerts
	I1007 11:53:44.357002 1401070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem (1078 bytes)
	I1007 11:53:44.357124 1401070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem (1123 bytes)
	I1007 11:53:44.357182 1401070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem (1675 bytes)
	I1007 11:53:44.357239 1401070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem org=jenkins.addons-268164 san=[127.0.0.1 192.168.49.2 addons-268164 localhost minikube]
	I1007 11:53:44.937737 1401070 provision.go:177] copyRemoteCerts
	I1007 11:53:44.937809 1401070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:53:44.937854 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:44.954375 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:53:45.051431 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 11:53:45.082842 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:53:45.114500 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:53:45.145485 1401070 provision.go:87] duration metric: took 805.784217ms to configureAuth
	I1007 11:53:45.145514 1401070 ubuntu.go:193] setting minikube options for container-runtime
	I1007 11:53:45.145735 1401070 config.go:182] Loaded profile config "addons-268164": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 11:53:45.145744 1401070 machine.go:96] duration metric: took 1.292701855s to provisionDockerMachine
	I1007 11:53:45.145751 1401070 client.go:171] duration metric: took 9.84985434s to LocalClient.Create
	I1007 11:53:45.145771 1401070 start.go:167] duration metric: took 9.849937521s to libmachine.API.Create "addons-268164"
	I1007 11:53:45.145781 1401070 start.go:293] postStartSetup for "addons-268164" (driver="docker")
	I1007 11:53:45.145791 1401070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:53:45.145851 1401070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:53:45.145894 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:45.165926 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:53:45.266809 1401070 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:53:45.271943 1401070 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 11:53:45.271981 1401070 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 11:53:45.271997 1401070 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 11:53:45.272004 1401070 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 11:53:45.272015 1401070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/addons for local assets ...
	I1007 11:53:45.272134 1401070 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/files for local assets ...
	I1007 11:53:45.272164 1401070 start.go:296] duration metric: took 126.376952ms for postStartSetup
	I1007 11:53:45.272524 1401070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-268164
	I1007 11:53:45.301718 1401070 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/config.json ...
	I1007 11:53:45.302037 1401070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 11:53:45.302088 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:45.321420 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:53:45.420359 1401070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 11:53:45.425014 1401070 start.go:128] duration metric: took 10.131127476s to createHost
	I1007 11:53:45.425040 1401070 start.go:83] releasing machines lock for "addons-268164", held for 10.131265229s
	I1007 11:53:45.425112 1401070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-268164
	I1007 11:53:45.442028 1401070 ssh_runner.go:195] Run: cat /version.json
	I1007 11:53:45.442085 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:45.442332 1401070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:53:45.442397 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:53:45.461608 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:53:45.477624 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:53:45.554990 1401070 ssh_runner.go:195] Run: systemctl --version
	I1007 11:53:45.687654 1401070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 11:53:45.691955 1401070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 11:53:45.716409 1401070 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1007 11:53:45.716522 1401070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:53:45.746410 1401070 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
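Note: the two find/sed one-liners above are hard to read in log form. An equivalent, more readable sketch of what they do (same paths as the log; GNU sed assumed):

    # 1) Patch loopback CNI configs: add a "name" field if missing, pin cniVersion to 1.0.0.
    for f in /etc/cni/net.d/*loopback.conf*; do
      [ -e "$f" ] || continue
      grep -q '"name"' "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
    done
    # 2) Park competing bridge/podman configs so the kindnet CNI chosen later owns pod networking.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done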
	I1007 11:53:45.746486 1401070 start.go:495] detecting cgroup driver to use...
	I1007 11:53:45.746534 1401070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 11:53:45.746613 1401070 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 11:53:45.759570 1401070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 11:53:45.770964 1401070 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:53:45.771071 1401070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:53:45.785070 1401070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:53:45.799036 1401070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:53:45.897763 1401070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:53:45.990260 1401070 docker.go:233] disabling docker service ...
	I1007 11:53:45.990359 1401070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:53:46.013170 1401070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:53:46.025847 1401070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:53:46.125210 1401070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:53:46.217792 1401070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:53:46.229238 1401070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:53:46.246710 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1007 11:53:46.256688 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 11:53:46.266514 1401070 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 11:53:46.266634 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 11:53:46.276592 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 11:53:46.286066 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 11:53:46.295695 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 11:53:46.305274 1401070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:53:46.314536 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 11:53:46.324250 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 11:53:46.334224 1401070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 11:53:46.344285 1401070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:53:46.352749 1401070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:53:46.361115 1401070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:53:46.441446 1401070 ssh_runner.go:195] Run: sudo systemctl restart containerd
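Note: taken together, the sed edits above leave config.toml with the cgroupfs driver (SystemdCgroup = false), the pause image matched to this Kubernetes release, unprivileged ports enabled, and the standard CNI conf dir. A sketch for verifying the result after the restart (grep pattern and expected lines reconstructed from the commands above, not captured output):

    sudo grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' \
      /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   restrict_oom_score_adj = false
    #   enable_unprivileged_ports = true
    #   conf_dir = "/etc/cni/net.d"
    #   SystemdCgroup = false
    systemctl is-active containerd   # active, after the restart above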
	I1007 11:53:46.579448 1401070 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 11:53:46.579603 1401070 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 11:53:46.583097 1401070 start.go:563] Will wait 60s for crictl version
	I1007 11:53:46.583201 1401070 ssh_runner.go:195] Run: which crictl
	I1007 11:53:46.586516 1401070 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:53:46.627411 1401070 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1007 11:53:46.627597 1401070 ssh_runner.go:195] Run: containerd --version
	I1007 11:53:46.651825 1401070 ssh_runner.go:195] Run: containerd --version
	I1007 11:53:46.681104 1401070 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1007 11:53:46.683922 1401070 cli_runner.go:164] Run: docker network inspect addons-268164 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 11:53:46.700146 1401070 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 11:53:46.703697 1401070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:53:46.714493 1401070 kubeadm.go:883] updating cluster {Name:addons-268164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-268164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:53:46.714608 1401070 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:46.714670 1401070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:53:46.750563 1401070 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 11:53:46.750591 1401070 containerd.go:534] Images already preloaded, skipping extraction
	I1007 11:53:46.750651 1401070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:53:46.786029 1401070 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 11:53:46.786052 1401070 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:53:46.786059 1401070 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1007 11:53:46.786149 1401070 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-268164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-268164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:53:46.786229 1401070 ssh_runner.go:195] Run: sudo crictl info
	I1007 11:53:46.826013 1401070 cni.go:84] Creating CNI manager for ""
	I1007 11:53:46.826039 1401070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 11:53:46.826052 1401070 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:53:46.826073 1401070 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-268164 NodeName:addons-268164 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:53:46.826216 1401070 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-268164"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
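Note: a generated config like the one above can be sanity-checked before the real init; kubeadm's own dry-run and migration tooling apply here (the deprecation warnings further down even point at `kubeadm config migrate`). A sketch, assuming the kubeadm binary matches the target v1.31.1:

    # Full init dry run: validates the config and renders manifests without touching the node.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # Rewrite the deprecated v1beta3 spec as the current API version (prints to stdout).
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml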
	
	I1007 11:53:46.826290 1401070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:53:46.835227 1401070 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:53:46.835331 1401070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:53:46.844313 1401070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1007 11:53:46.862013 1401070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:53:46.880169 1401070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1007 11:53:46.897947 1401070 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1007 11:53:46.901286 1401070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:53:46.912167 1401070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:53:46.990457 1401070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:53:47.007999 1401070 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164 for IP: 192.168.49.2
	I1007 11:53:47.008023 1401070 certs.go:194] generating shared ca certs ...
	I1007 11:53:47.008042 1401070 certs.go:226] acquiring lock for ca certs: {Name:mk4964dcb525e1a3c94069cf2fb52c246bc0ce74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:47.008255 1401070 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key
	I1007 11:53:47.176383 1401070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt ...
	I1007 11:53:47.176417 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt: {Name:mk16bd186f5722ce109399a9d67478608b3c8cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:47.177247 1401070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key ...
	I1007 11:53:47.177265 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key: {Name:mk526ba449739d8198768b29489e712d3d8d997d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:47.177415 1401070 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key
	I1007 11:53:47.685218 1401070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.crt ...
	I1007 11:53:47.685255 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.crt: {Name:mk521600ea3a227bfaded8d49beb478b20791c7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:47.685452 1401070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key ...
	I1007 11:53:47.685465 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key: {Name:mk0ba87663fc0ce6213107b87f81921e8c02919d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:47.685546 1401070 certs.go:256] generating profile certs ...
	I1007 11:53:47.685610 1401070 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.key
	I1007 11:53:47.685636 1401070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt with IP's: []
	I1007 11:53:48.185887 1401070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt ...
	I1007 11:53:48.185921 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: {Name:mke528d9c03a05ca0d1c38bb36cf577d06aec444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:48.186118 1401070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.key ...
	I1007 11:53:48.186133 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.key: {Name:mkd2f90cf0eac3ddbaaa384aa283e386f5cdc46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:48.186663 1401070 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key.f04d7a8b
	I1007 11:53:48.186688 1401070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt.f04d7a8b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1007 11:53:48.886466 1401070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt.f04d7a8b ...
	I1007 11:53:48.886501 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt.f04d7a8b: {Name:mk724e5aa91d990fa0495ef51b1f509663242d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:48.886684 1401070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key.f04d7a8b ...
	I1007 11:53:48.886699 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key.f04d7a8b: {Name:mkd3658bee406578d112f0b69637eea55f566cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:48.886785 1401070 certs.go:381] copying /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt.f04d7a8b -> /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt
	I1007 11:53:48.886861 1401070 certs.go:385] copying /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key.f04d7a8b -> /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key
	I1007 11:53:48.886917 1401070 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.key
	I1007 11:53:48.886941 1401070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.crt with IP's: []
	I1007 11:53:49.179101 1401070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.crt ...
	I1007 11:53:49.179133 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.crt: {Name:mkb1a32c6f28970522367fe28cb9630eb7b772bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:49.179736 1401070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.key ...
	I1007 11:53:49.179755 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.key: {Name:mk3536a05cafaf2dc8c1587ae730e17bf84de0ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:49.179950 1401070 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 11:53:49.179993 1401070 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem (1078 bytes)
	I1007 11:53:49.180020 1401070 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:53:49.180047 1401070 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem (1675 bytes)
	I1007 11:53:49.180662 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:53:49.206031 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 11:53:49.229948 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:53:49.253465 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 11:53:49.277353 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:53:49.300639 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:53:49.324286 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:53:49.347497 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:53:49.371277 1401070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:53:49.395337 1401070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:53:49.413031 1401070 ssh_runner.go:195] Run: openssl version
	I1007 11:53:49.418212 1401070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:53:49.427480 1401070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:53:49.431894 1401070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:53:49.432019 1401070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:53:49.439435 1401070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
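Note: the b5213941.0 name is not arbitrary. OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash-named symlinks, and the hash comes from the `openssl x509 -hash` call above:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0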
	I1007 11:53:49.448846 1401070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:53:49.452929 1401070 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:53:49.453002 1401070 kubeadm.go:392] StartCluster: {Name:addons-268164 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-268164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:53:49.453124 1401070 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 11:53:49.453224 1401070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:53:49.499308 1401070 cri.go:89] found id: ""
	I1007 11:53:49.499404 1401070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:53:49.508148 1401070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:53:49.516804 1401070 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 11:53:49.516897 1401070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:53:49.525400 1401070 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:53:49.525420 1401070 kubeadm.go:157] found existing configuration files:
	
	I1007 11:53:49.525500 1401070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:53:49.533805 1401070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:53:49.533873 1401070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:53:49.542106 1401070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:53:49.550532 1401070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:53:49.550624 1401070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:53:49.558758 1401070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:53:49.567589 1401070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:53:49.567695 1401070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:53:49.576098 1401070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:53:49.584417 1401070 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:53:49.584501 1401070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:53:49.592784 1401070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 11:53:49.636979 1401070 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:53:49.637275 1401070 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:53:49.655925 1401070 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 11:53:49.655999 1401070 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 11:53:49.656038 1401070 kubeadm.go:310] OS: Linux
	I1007 11:53:49.656092 1401070 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 11:53:49.656143 1401070 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 11:53:49.656193 1401070 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 11:53:49.656245 1401070 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 11:53:49.656296 1401070 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 11:53:49.656347 1401070 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 11:53:49.656399 1401070 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 11:53:49.656450 1401070 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 11:53:49.656499 1401070 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 11:53:49.712534 1401070 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:53:49.712650 1401070 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:53:49.712747 1401070 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:53:49.719947 1401070 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:53:49.724089 1401070 out.go:235]   - Generating certificates and keys ...
	I1007 11:53:49.724306 1401070 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:53:49.724384 1401070 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:53:50.219381 1401070 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:53:50.511343 1401070 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:53:50.709314 1401070 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:53:51.537678 1401070 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:53:52.503588 1401070 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:53:52.503967 1401070 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-268164 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 11:53:52.870315 1401070 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:53:52.870691 1401070 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-268164 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 11:53:53.620809 1401070 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:53:53.961641 1401070 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:53:54.443307 1401070 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:53:54.443577 1401070 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:53:54.794176 1401070 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:53:55.373136 1401070 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:53:55.646731 1401070 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:53:56.384395 1401070 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:53:56.728635 1401070 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:53:56.729357 1401070 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:53:56.732295 1401070 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:53:56.735461 1401070 out.go:235]   - Booting up control plane ...
	I1007 11:53:56.735580 1401070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:53:56.735658 1401070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:53:56.736070 1401070 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:53:56.747617 1401070 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:53:56.753601 1401070 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:53:56.753662 1401070 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:53:56.844077 1401070 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:53:56.844197 1401070 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:53:58.341866 1401070 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501361363s
	I1007 11:53:58.341983 1401070 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:54:03.843003 1401070 kubeadm.go:310] [api-check] The API server is healthy after 5.501422567s
	I1007 11:54:03.864661 1401070 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:54:03.881375 1401070 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:54:03.910281 1401070 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:54:03.910509 1401070 kubeadm.go:310] [mark-control-plane] Marking the node addons-268164 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:54:03.921247 1401070 kubeadm.go:310] [bootstrap-token] Using token: b0kilh.k3zsa9b0ayaz6d0x
	I1007 11:54:03.924038 1401070 out.go:235]   - Configuring RBAC rules ...
	I1007 11:54:03.924170 1401070 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:54:03.928423 1401070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:54:03.936869 1401070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:54:03.942618 1401070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:54:03.946572 1401070 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:54:03.950619 1401070 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:54:04.252069 1401070 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:54:04.683835 1401070 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:54:05.252028 1401070 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:54:05.253118 1401070 kubeadm.go:310] 
	I1007 11:54:05.253188 1401070 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:54:05.253194 1401070 kubeadm.go:310] 
	I1007 11:54:05.253270 1401070 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:54:05.253275 1401070 kubeadm.go:310] 
	I1007 11:54:05.253300 1401070 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:54:05.253358 1401070 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:54:05.253407 1401070 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:54:05.253411 1401070 kubeadm.go:310] 
	I1007 11:54:05.253464 1401070 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:54:05.253469 1401070 kubeadm.go:310] 
	I1007 11:54:05.253517 1401070 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:54:05.253521 1401070 kubeadm.go:310] 
	I1007 11:54:05.253572 1401070 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:54:05.253645 1401070 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:54:05.253712 1401070 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:54:05.253716 1401070 kubeadm.go:310] 
	I1007 11:54:05.253799 1401070 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:54:05.253874 1401070 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:54:05.253878 1401070 kubeadm.go:310] 
	I1007 11:54:05.253961 1401070 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b0kilh.k3zsa9b0ayaz6d0x \
	I1007 11:54:05.254062 1401070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c516bf4802c0ca88f09ee1087544b61fd7c26a4d5336f1cac8e48e5b4aea79e2 \
	I1007 11:54:05.254083 1401070 kubeadm.go:310] 	--control-plane 
	I1007 11:54:05.254088 1401070 kubeadm.go:310] 
	I1007 11:54:05.254190 1401070 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:54:05.254196 1401070 kubeadm.go:310] 
	I1007 11:54:05.254282 1401070 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b0kilh.k3zsa9b0ayaz6d0x \
	I1007 11:54:05.254620 1401070 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c516bf4802c0ca88f09ee1087544b61fd7c26a4d5336f1cac8e48e5b4aea79e2 
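Note: the --discovery-token-ca-cert-hash above is a SHA-256 over the cluster CA's public key. The standard recipe from the Kubernetes docs reproduces it, with the CA path adjusted to minikube's certificatesDir from the config above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # c516bf4802c0ca88f09ee1087544b61fd7c26a4d5336f1cac8e48e5b4aea79e2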
	I1007 11:54:05.259251 1401070 kubeadm.go:310] W1007 11:53:49.633495    1021 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:54:05.259577 1401070 kubeadm.go:310] W1007 11:53:49.634494    1021 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:54:05.259792 1401070 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 11:54:05.259902 1401070 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:54:05.259920 1401070 cni.go:84] Creating CNI manager for ""
	I1007 11:54:05.259928 1401070 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 11:54:05.262720 1401070 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 11:54:05.265398 1401070 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 11:54:05.269262 1401070 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 11:54:05.269283 1401070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 11:54:05.288360 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 11:54:05.557179 1401070 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:54:05.557307 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:05.557379 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-268164 minikube.k8s.io/updated_at=2024_10_07T11_54_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-268164 minikube.k8s.io/primary=true
	I1007 11:54:05.565232 1401070 ops.go:34] apiserver oom_adj: -16
	I1007 11:54:05.716408 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:06.216783 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:06.716551 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:07.216979 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:07.716598 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:08.217025 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:08.716556 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:09.216520 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:09.717464 1401070 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:54:09.817408 1401070 kubeadm.go:1113] duration metric: took 4.260143891s to wait for elevateKubeSystemPrivileges
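Note: the burst of identical `kubectl get sa default` calls above is a plain poll. Right after kubeadm init, the controller-manager has not yet created the `default` ServiceAccount that addon pods will need, so minikube retries roughly every 500ms until it exists. A shell equivalent of that wait:

    # Poll until the 'default' ServiceAccount appears in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done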
	I1007 11:54:09.817441 1401070 kubeadm.go:394] duration metric: took 20.364442589s to StartCluster
	I1007 11:54:09.817466 1401070 settings.go:142] acquiring lock: {Name:mk92e55c8b3391b1d94595f100e47ff9f6bf1d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:54:09.818097 1401070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 11:54:09.818509 1401070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/kubeconfig: {Name:mkef6c987beefaa5e568c1a78e7d094f26b41d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:54:09.818729 1401070 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 11:54:09.818875 1401070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:54:09.819133 1401070 config.go:182] Loaded profile config "addons-268164": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 11:54:09.819176 1401070 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 11:54:09.819264 1401070 addons.go:69] Setting yakd=true in profile "addons-268164"
	I1007 11:54:09.819292 1401070 addons.go:234] Setting addon yakd=true in "addons-268164"
	I1007 11:54:09.819318 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.819910 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.819974 1401070 addons.go:69] Setting inspektor-gadget=true in profile "addons-268164"
	I1007 11:54:09.820020 1401070 addons.go:234] Setting addon inspektor-gadget=true in "addons-268164"
	I1007 11:54:09.820084 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.820404 1401070 addons.go:69] Setting metrics-server=true in profile "addons-268164"
	I1007 11:54:09.820428 1401070 addons.go:234] Setting addon metrics-server=true in "addons-268164"
	I1007 11:54:09.820451 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.820778 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.820847 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823186 1401070 addons.go:69] Setting cloud-spanner=true in profile "addons-268164"
	I1007 11:54:09.823475 1401070 addons.go:234] Setting addon cloud-spanner=true in "addons-268164"
	I1007 11:54:09.823781 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.824013 1401070 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-268164"
	I1007 11:54:09.824040 1401070 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-268164"
	I1007 11:54:09.824074 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.824609 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823362 1401070 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-268164"
	I1007 11:54:09.825559 1401070 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-268164"
	I1007 11:54:09.825595 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.826084 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.828215 1401070 addons.go:69] Setting registry=true in profile "addons-268164"
	I1007 11:54:09.828277 1401070 addons.go:234] Setting addon registry=true in "addons-268164"
	I1007 11:54:09.828321 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.828818 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823378 1401070 addons.go:69] Setting default-storageclass=true in profile "addons-268164"
	I1007 11:54:09.830118 1401070 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-268164"
	I1007 11:54:09.830431 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.831892 1401070 addons.go:69] Setting storage-provisioner=true in profile "addons-268164"
	I1007 11:54:09.831928 1401070 addons.go:234] Setting addon storage-provisioner=true in "addons-268164"
	I1007 11:54:09.831972 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.832417 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823384 1401070 addons.go:69] Setting gcp-auth=true in profile "addons-268164"
	I1007 11:54:09.840919 1401070 mustload.go:65] Loading cluster: addons-268164
	I1007 11:54:09.841307 1401070 config.go:182] Loaded profile config "addons-268164": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 11:54:09.841597 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.842980 1401070 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-268164"
	I1007 11:54:09.843026 1401070 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-268164"
	I1007 11:54:09.843582 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823388 1401070 addons.go:69] Setting ingress=true in profile "addons-268164"
	I1007 11:54:09.863647 1401070 addons.go:234] Setting addon ingress=true in "addons-268164"
	I1007 11:54:09.863699 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.864097 1401070 addons.go:69] Setting volcano=true in profile "addons-268164"
	I1007 11:54:09.864112 1401070 addons.go:234] Setting addon volcano=true in "addons-268164"
	I1007 11:54:09.864135 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.864535 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.868597 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.823392 1401070 addons.go:69] Setting ingress-dns=true in profile "addons-268164"
	I1007 11:54:09.883673 1401070 addons.go:234] Setting addon ingress-dns=true in "addons-268164"
	I1007 11:54:09.883722 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.884194 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.885932 1401070 addons.go:69] Setting volumesnapshots=true in profile "addons-268164"
	I1007 11:54:09.885959 1401070 addons.go:234] Setting addon volumesnapshots=true in "addons-268164"
	I1007 11:54:09.885994 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:09.886486 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:09.896011 1401070 out.go:177] * Verifying Kubernetes components...
	I1007 11:54:09.906437 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
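Each addon toggle above is immediately followed by a docker container inspect call with a --format={{.State.Status}} template, which confirms the "addons-268164" node container is still running before any manifests are shipped to it. A minimal standalone equivalent (a sketch; for a healthy node the template should print "running"):

	docker container inspect addons-268164 --format '{{.State.Status}}'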
	I1007 11:54:10.008106 1401070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:54:10.018035 1401070 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:54:10.076444 1401070 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:54:10.100415 1401070 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:54:10.100928 1401070 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:54:10.111025 1401070 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:54:10.111154 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.123644 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:54:10.123707 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:54:10.123817 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.131155 1401070 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:54:10.143319 1401070 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:54:10.101029 1401070 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:54:10.101134 1401070 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:54:10.143484 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:54:10.143648 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
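The longer inspect template used here, (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort, resolves the host port that Docker mapped to the container's SSH port; the ssh clients created below dial that port on 127.0.0.1. A shorter way to read the same mapping by hand (a sketch; docker port prints the full host address, e.g. 0.0.0.0:37896, rather than the bare port):

	docker port addons-268164 22/tcp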
	I1007 11:54:10.144217 1401070 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:54:10.107854 1401070 addons.go:234] Setting addon default-storageclass=true in "addons-268164"
	I1007 11:54:10.146252 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:10.148042 1401070 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-268164"
	I1007 11:54:10.148083 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:10.148475 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:10.151763 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:10.156440 1401070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:54:10.156463 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:54:10.156536 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.157088 1401070 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1007 11:54:10.157309 1401070 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:54:10.157578 1401070 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:54:10.157591 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:54:10.157638 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.159860 1401070 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:54:10.160073 1401070 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:54:10.160087 1401070 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:54:10.160165 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.163459 1401070 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:54:10.169791 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:54:10.170958 1401070 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:54:10.170976 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:54:10.171046 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.185300 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:54:10.185327 1401070 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:54:10.185396 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.144914 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:10.198782 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:54:10.200418 1401070 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1007 11:54:10.200554 1401070 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:54:10.201400 1401070 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:54:10.201418 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:54:10.201501 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.207311 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:54:10.211876 1401070 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:54:10.221580 1401070 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1007 11:54:10.227483 1401070 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:54:10.227519 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:54:10.227645 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.252548 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:54:10.258183 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 11:54:10.260938 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:54:10.263632 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:54:10.320221 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:54:10.320428 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.362362 1401070 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:54:10.334088 1401070 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1007 11:54:10.363880 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1007 11:54:10.363957 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.334191 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.371312 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:54:10.354992 1401070 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:54:10.371589 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.373232 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:54:10.373462 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.377289 1401070 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:54:10.381044 1401070 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:54:10.383938 1401070 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:54:10.383963 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:54:10.384036 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.373244 1401070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:54:10.387496 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:10.420868 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.421591 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.440178 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.483938 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.485825 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.503760 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.515135 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.517928 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.539833 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.540758 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	W1007 11:54:10.542052 1401070 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 11:54:10.542083 1401070 retry.go:31] will retry after 208.174335ms: ssh: handshake failed: EOF
	W1007 11:54:10.542613 1401070 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 11:54:10.542630 1401070 retry.go:31] will retry after 247.921887ms: ssh: handshake failed: EOF
	I1007 11:54:10.552556 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:10.650784 1401070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:54:10.651105 1401070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1007 11:54:10.793920 1401070 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 11:54:10.793957 1401070 retry.go:31] will retry after 230.390788ms: ssh: handshake failed: EOF
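The handshake failures above are transient: sshd inside the freshly started container is not accepting connections yet, so retry.go redials after a short backoff and the later ssh clients succeed. Reconstructing the probe by hand from the "new ssh client" lines (port 37896, user docker, the logged key path) would look roughly like the following; this is an illustrative manual check, not a command minikube itself runs:

	ssh -i /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa \
	    -p 37896 docker@127.0.0.1 true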
	I1007 11:54:11.260045 1401070 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:54:11.260121 1401070 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:54:11.264679 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:54:11.278599 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:54:11.316716 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:54:11.316739 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:54:11.370565 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:54:11.436926 1401070 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:54:11.437016 1401070 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:54:11.451956 1401070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:54:11.452062 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:54:11.471076 1401070 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:54:11.471158 1401070 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:54:11.484271 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:54:11.494209 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:54:11.535842 1401070 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:54:11.535927 1401070 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:54:11.593036 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:54:11.593108 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:54:11.619330 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:54:11.625826 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:54:11.653224 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1007 11:54:11.732795 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:54:11.732822 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:54:11.766417 1401070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:54:11.766444 1401070 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:54:11.812155 1401070 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:54:11.812181 1401070 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:54:11.816375 1401070 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:54:11.816398 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 11:54:11.857276 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:54:11.857301 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:54:11.914094 1401070 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:54:11.914121 1401070 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:54:11.922169 1401070 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:54:11.922195 1401070 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:54:11.964821 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:54:11.964848 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:54:11.970577 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:54:12.035634 1401070 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:54:12.035662 1401070 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:54:12.053814 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:54:12.053841 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:54:12.106780 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:54:12.122404 1401070 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:54:12.122431 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:54:12.227036 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:54:12.227064 1401070 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:54:12.242145 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:54:12.242173 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:54:12.264250 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:54:12.264277 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:54:12.368323 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:54:12.497914 1401070 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.846755878s)
	I1007 11:54:12.497945 1401070 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
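The sed pipeline that just completed edits the Corefile held in the coredns ConfigMap: it inserts a log directive before the errors line and, before the "forward . /etc/resolv.conf" line, the hosts stanza below (reconstructed directly from the sed expression), which is what makes host.minikube.internal resolvable from inside the cluster:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}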
	I1007 11:54:12.498971 1401070 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.848163325s)
	I1007 11:54:12.499710 1401070 node_ready.go:35] waiting up to 6m0s for node "addons-268164" to be "Ready" ...
	I1007 11:54:12.507102 1401070 node_ready.go:49] node "addons-268164" has status "Ready":"True"
	I1007 11:54:12.507128 1401070 node_ready.go:38] duration metric: took 7.393454ms for node "addons-268164" to be "Ready" ...
	I1007 11:54:12.507139 1401070 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:54:12.517078 1401070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:12.544744 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:54:12.544771 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:54:12.589550 1401070 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:54:12.589572 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:54:12.709227 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:54:12.709253 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:54:12.825948 1401070 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:54:12.825975 1401070 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:54:12.845714 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:54:12.984889 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:54:12.984919 1401070 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:54:13.013832 1401070 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-268164" context rescaled to 1 replicas
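Rescaling coredns to a single replica means only one DNS pod has to reach Ready, which is why the pod_ready wait tracks coredns-7c65d6cfc9-42kp9 and the second replica (coredns-7c65d6cfc9-4hrgs) is later reported as not found. The equivalent standalone command would be something like this sketch (not the literal API call kapi.go makes):

	kubectl --context addons-268164 -n kube-system scale deployment coredns --replicas=1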
	I1007 11:54:13.106131 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:54:13.106156 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:54:13.310589 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:54:13.310615 1401070 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:54:13.382168 1401070 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:54:13.382193 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:54:13.608634 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:54:13.608659 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:54:13.768317 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.503552823s)
	I1007 11:54:13.768386 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.489767303s)
	I1007 11:54:13.768600 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.397949638s)
	I1007 11:54:13.876199 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:54:13.892313 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:54:13.892342 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:54:14.282788 1401070 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:54:14.282814 1401070 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:54:14.604138 1401070 pod_ready.go:103] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"False"
	I1007 11:54:14.872668 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:54:15.290410 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.80604618s)
	I1007 11:54:17.064738 1401070 pod_ready.go:103] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"False"
	I1007 11:54:17.408632 1401070 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:54:17.408711 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:17.450385 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:18.099735 1401070 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:54:18.223101 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.728801662s)
	I1007 11:54:18.223136 1401070 addons.go:475] Verifying addon ingress=true in "addons-268164"
	I1007 11:54:18.223289 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.603889166s)
	I1007 11:54:18.223350 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.597453447s)
	I1007 11:54:18.224274 1401070 out.go:177] * Verifying ingress addon...
	I1007 11:54:18.226012 1401070 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:54:18.232383 1401070 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:54:18.232408 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:18.301528 1401070 addons.go:234] Setting addon gcp-auth=true in "addons-268164"
	I1007 11:54:18.301640 1401070 host.go:66] Checking if "addons-268164" exists ...
	I1007 11:54:18.302229 1401070 cli_runner.go:164] Run: docker container inspect addons-268164 --format={{.State.Status}}
	I1007 11:54:18.326767 1401070 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:54:18.326826 1401070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-268164
	I1007 11:54:18.366862 1401070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37896 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/addons-268164/id_rsa Username:docker}
	I1007 11:54:18.732303 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:19.232815 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:19.568053 1401070 pod_ready.go:103] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"False"
	I1007 11:54:19.733294 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:20.241421 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:20.584037 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.930771005s)
	I1007 11:54:20.584105 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.613497723s)
	I1007 11:54:20.584122 1401070 addons.go:475] Verifying addon registry=true in "addons-268164"
	I1007 11:54:20.584313 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.477495433s)
	I1007 11:54:20.584330 1401070 addons.go:475] Verifying addon metrics-server=true in "addons-268164"
	I1007 11:54:20.584367 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.216017796s)
	I1007 11:54:20.584701 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.738953494s)
	W1007 11:54:20.584736 1401070 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:54:20.584755 1401070 retry.go:31] will retry after 159.569768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:54:20.584830 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.70859773s)
	I1007 11:54:20.587675 1401070 out.go:177] * Verifying registry addon...
	I1007 11:54:20.589651 1401070 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-268164 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:54:20.593829 1401070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:54:20.604692 1401070 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:54:20.604718 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:20.745026 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
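The failure being retried here is a CRD establishment race: the volumesnapshot CRDs and a VolumeSnapshotClass that instantiates them were sent in one kubectl apply batch, and the CRDs (created per the stdout above) were not yet served by discovery when the custom resource was validated, hence "ensure CRDs are installed first". minikube's remedy is the timed retry with --force above; one way to avoid the race altogether, sketched against the manifest names in this log, is to apply the CRDs first and wait for their Established condition:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml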
	I1007 11:54:20.802983 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:21.101737 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:21.260783 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:21.365804 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.493084325s)
	I1007 11:54:21.365908 1401070 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-268164"
	I1007 11:54:21.366105 1401070 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.039314615s)
	I1007 11:54:21.369544 1401070 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:54:21.369693 1401070 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:54:21.373423 1401070 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:54:21.373748 1401070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:54:21.376521 1401070 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:54:21.376638 1401070 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:54:21.378251 1401070 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:54:21.378270 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:21.434944 1401070 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:54:21.434966 1401070 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:54:21.478232 1401070 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:54:21.478302 1401070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:54:21.512294 1401070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:54:21.598871 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:21.750749 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:21.880262 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:22.025469 1401070 pod_ready.go:103] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"False"
	I1007 11:54:22.099287 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:22.231780 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:22.339292 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.594216861s)
	I1007 11:54:22.379752 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:22.603198 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:22.649919 1401070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.137513138s)
	I1007 11:54:22.652893 1401070 addons.go:475] Verifying addon gcp-auth=true in "addons-268164"
	I1007 11:54:22.655930 1401070 out.go:177] * Verifying gcp-auth addon...
	I1007 11:54:22.659682 1401070 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:54:22.702439 1401070 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:54:22.730447 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:22.879633 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:23.098865 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:23.231521 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:23.379355 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:23.599285 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:23.800536 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:23.901781 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:24.097789 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:24.230092 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:24.379127 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:24.523265 1401070 pod_ready.go:103] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"False"
	I1007 11:54:24.598287 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:24.730897 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:24.878901 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:25.099344 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:25.231000 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:25.378451 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:25.599750 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:25.730647 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:25.880837 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:26.025659 1401070 pod_ready.go:93] pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.025787 1401070 pod_ready.go:82] duration metric: took 13.508668391s for pod "coredns-7c65d6cfc9-42kp9" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.025813 1401070 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4hrgs" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.028774 1401070 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-4hrgs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-4hrgs" not found
	I1007 11:54:26.028853 1401070 pod_ready.go:82] duration metric: took 3.003321ms for pod "coredns-7c65d6cfc9-4hrgs" in "kube-system" namespace to be "Ready" ...
	E1007 11:54:26.028880 1401070 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-4hrgs" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-4hrgs" not found
	I1007 11:54:26.028916 1401070 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.035597 1401070 pod_ready.go:93] pod "etcd-addons-268164" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.035670 1401070 pod_ready.go:82] duration metric: took 6.72769ms for pod "etcd-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.035700 1401070 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.042015 1401070 pod_ready.go:93] pod "kube-apiserver-addons-268164" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.042084 1401070 pod_ready.go:82] duration metric: took 6.364524ms for pod "kube-apiserver-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.042113 1401070 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.048840 1401070 pod_ready.go:93] pod "kube-controller-manager-addons-268164" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.048912 1401070 pod_ready.go:82] duration metric: took 6.778544ms for pod "kube-controller-manager-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.048941 1401070 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrcbk" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.098356 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:26.221952 1401070 pod_ready.go:93] pod "kube-proxy-nrcbk" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.222028 1401070 pod_ready.go:82] duration metric: took 173.067504ms for pod "kube-proxy-nrcbk" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.222055 1401070 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.231213 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:26.378959 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:26.598800 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:26.620881 1401070 pod_ready.go:93] pod "kube-scheduler-addons-268164" in "kube-system" namespace has status "Ready":"True"
	I1007 11:54:26.620902 1401070 pod_ready.go:82] duration metric: took 398.827398ms for pod "kube-scheduler-addons-268164" in "kube-system" namespace to be "Ready" ...
	I1007 11:54:26.620911 1401070 pod_ready.go:39] duration metric: took 14.113760112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:54:26.620926 1401070 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:54:26.620987 1401070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:54:26.637349 1401070 api_server.go:72] duration metric: took 16.818577831s to wait for apiserver process to appear ...
	I1007 11:54:26.637371 1401070 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:54:26.637393 1401070 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 11:54:26.645268 1401070 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
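api_server.go treats the apiserver as healthy once /healthz returns HTTP 200 with the body "ok", as it just did. A manual equivalent against the same endpoint (a sketch; -k skips TLS verification because the cluster CA is not in the host trust store, and /healthz is readable anonymously on a default-configured apiserver):

	curl -k https://192.168.49.2:8443/healthz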
	I1007 11:54:26.646320 1401070 api_server.go:141] control plane version: v1.31.1
	I1007 11:54:26.646408 1401070 api_server.go:131] duration metric: took 9.028751ms to wait for apiserver health ...
	I1007 11:54:26.646433 1401070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:54:26.729668 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:26.826722 1401070 system_pods.go:59] 18 kube-system pods found
	I1007 11:54:26.826801 1401070 system_pods.go:61] "coredns-7c65d6cfc9-42kp9" [c3a4bc00-1e29-4e75-9e67-b9b8f8af2ed3] Running
	I1007 11:54:26.826826 1401070 system_pods.go:61] "csi-hostpath-attacher-0" [e80be59c-3e5d-4b6b-9f76-ce4c6a467429] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 11:54:26.826848 1401070 system_pods.go:61] "csi-hostpath-resizer-0" [7f14db47-2624-4772-a861-aef21f35ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 11:54:26.826884 1401070 system_pods.go:61] "csi-hostpathplugin-fgt4c" [c7e9bb2f-0c00-4de4-ac4e-bbee9e265d9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 11:54:26.826910 1401070 system_pods.go:61] "etcd-addons-268164" [43790ca8-2826-473f-b40e-8f76849055ad] Running
	I1007 11:54:26.826929 1401070 system_pods.go:61] "kindnet-69x4j" [a22fec63-5f98-4708-ac4f-88a5499e582d] Running
	I1007 11:54:26.826944 1401070 system_pods.go:61] "kube-apiserver-addons-268164" [e726996d-573b-4e7f-ac80-830396ab2daf] Running
	I1007 11:54:26.826963 1401070 system_pods.go:61] "kube-controller-manager-addons-268164" [c0e039cf-62a4-4e6d-82e4-2ca392298ce9] Running
	I1007 11:54:26.826992 1401070 system_pods.go:61] "kube-ingress-dns-minikube" [277d18dd-919b-49b0-8475-b7d996748af8] Running
	I1007 11:54:26.827018 1401070 system_pods.go:61] "kube-proxy-nrcbk" [70009dfc-32f7-4cf0-9d50-397215a8be31] Running
	I1007 11:54:26.827036 1401070 system_pods.go:61] "kube-scheduler-addons-268164" [faae957c-467f-46bf-b612-a0b4eca05ff6] Running
	I1007 11:54:26.827058 1401070 system_pods.go:61] "metrics-server-84c5f94fbc-lt7q4" [17f82f57-6954-417d-a932-533586c9d8e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 11:54:26.827079 1401070 system_pods.go:61] "nvidia-device-plugin-daemonset-c95fk" [6b3a7b2c-7ac7-4afb-96d5-5f58856d2ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 11:54:26.827116 1401070 system_pods.go:61] "registry-66c9cd494c-c2dt2" [acd6eaf5-969e-4672-988f-259e8dceaa8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 11:54:26.827136 1401070 system_pods.go:61] "registry-proxy-fs9v5" [7ec0748d-37a9-42c3-a336-49194895a61c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 11:54:26.827159 1401070 system_pods.go:61] "snapshot-controller-56fcc65765-bgvc8" [4b2f6c57-cd33-4a7d-91ae-1288b5687367] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 11:54:26.827190 1401070 system_pods.go:61] "snapshot-controller-56fcc65765-sj6bv" [37a9c272-9edc-4e2a-be8a-db6bf1c8bec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 11:54:26.827211 1401070 system_pods.go:61] "storage-provisioner" [06cf396a-4cc0-4c36-9d2c-bc6981479528] Running
	I1007 11:54:26.827230 1401070 system_pods.go:74] duration metric: took 180.779283ms to wait for pod list to return data ...
	I1007 11:54:26.827250 1401070 default_sa.go:34] waiting for default service account to be created ...
	I1007 11:54:26.878924 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:27.021466 1401070 default_sa.go:45] found service account: "default"
	I1007 11:54:27.021537 1401070 default_sa.go:55] duration metric: took 194.267909ms for default service account to be created ...
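Finding the default service account is a single client-go Get, retried until the controller-manager's service-account controller has created the object. A minimal sketch under assumptions: the kubeconfig path is hypothetical, and the real harness builds its client from the test profile's kubeconfig:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The "default" ServiceAccount appears asynchronously once the
	// service-account controller runs, so poll until the Get succeeds.
	for {
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("found service account:", sa.Name)
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
}
```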
	I1007 11:54:27.021565 1401070 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 11:54:27.099161 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:27.230304 1401070 system_pods.go:86] 18 kube-system pods found
	I1007 11:54:27.230338 1401070 system_pods.go:89] "coredns-7c65d6cfc9-42kp9" [c3a4bc00-1e29-4e75-9e67-b9b8f8af2ed3] Running
	I1007 11:54:27.230350 1401070 system_pods.go:89] "csi-hostpath-attacher-0" [e80be59c-3e5d-4b6b-9f76-ce4c6a467429] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 11:54:27.230357 1401070 system_pods.go:89] "csi-hostpath-resizer-0" [7f14db47-2624-4772-a861-aef21f35ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 11:54:27.230366 1401070 system_pods.go:89] "csi-hostpathplugin-fgt4c" [c7e9bb2f-0c00-4de4-ac4e-bbee9e265d9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 11:54:27.230372 1401070 system_pods.go:89] "etcd-addons-268164" [43790ca8-2826-473f-b40e-8f76849055ad] Running
	I1007 11:54:27.230377 1401070 system_pods.go:89] "kindnet-69x4j" [a22fec63-5f98-4708-ac4f-88a5499e582d] Running
	I1007 11:54:27.230381 1401070 system_pods.go:89] "kube-apiserver-addons-268164" [e726996d-573b-4e7f-ac80-830396ab2daf] Running
	I1007 11:54:27.230391 1401070 system_pods.go:89] "kube-controller-manager-addons-268164" [c0e039cf-62a4-4e6d-82e4-2ca392298ce9] Running
	I1007 11:54:27.230396 1401070 system_pods.go:89] "kube-ingress-dns-minikube" [277d18dd-919b-49b0-8475-b7d996748af8] Running
	I1007 11:54:27.230406 1401070 system_pods.go:89] "kube-proxy-nrcbk" [70009dfc-32f7-4cf0-9d50-397215a8be31] Running
	I1007 11:54:27.230410 1401070 system_pods.go:89] "kube-scheduler-addons-268164" [faae957c-467f-46bf-b612-a0b4eca05ff6] Running
	I1007 11:54:27.230416 1401070 system_pods.go:89] "metrics-server-84c5f94fbc-lt7q4" [17f82f57-6954-417d-a932-533586c9d8e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 11:54:27.230428 1401070 system_pods.go:89] "nvidia-device-plugin-daemonset-c95fk" [6b3a7b2c-7ac7-4afb-96d5-5f58856d2ce2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 11:54:27.230435 1401070 system_pods.go:89] "registry-66c9cd494c-c2dt2" [acd6eaf5-969e-4672-988f-259e8dceaa8f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 11:54:27.230444 1401070 system_pods.go:89] "registry-proxy-fs9v5" [7ec0748d-37a9-42c3-a336-49194895a61c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 11:54:27.230451 1401070 system_pods.go:89] "snapshot-controller-56fcc65765-bgvc8" [4b2f6c57-cd33-4a7d-91ae-1288b5687367] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 11:54:27.230462 1401070 system_pods.go:89] "snapshot-controller-56fcc65765-sj6bv" [37a9c272-9edc-4e2a-be8a-db6bf1c8bec5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 11:54:27.230466 1401070 system_pods.go:89] "storage-provisioner" [06cf396a-4cc0-4c36-9d2c-bc6981479528] Running
	I1007 11:54:27.230480 1401070 system_pods.go:126] duration metric: took 208.896084ms to wait for k8s-apps to be running ...
	I1007 11:54:27.230493 1401070 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:54:27.230551 1401070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:54:27.236244 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:27.245286 1401070 system_svc.go:56] duration metric: took 14.784323ms WaitForService to wait for kubelet
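The kubelet check relies entirely on systemctl's exit code: `is-active --quiet` prints nothing and exits 0 only when the unit is active. A local sketch (the harness runs the command with sudo over SSH inside the node; the extra `service` token in the logged command line is minikube's literal invocation and is omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 <=> the kubelet unit is active; any error means it is not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```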
	I1007 11:54:27.245317 1401070 kubeadm.go:582] duration metric: took 17.426552056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:54:27.245337 1401070 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:54:27.390050 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:27.421677 1401070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 11:54:27.421714 1401070 node_conditions.go:123] node cpu capacity is 2
	I1007 11:54:27.421727 1401070 node_conditions.go:105] duration metric: took 176.384606ms to run NodePressure ...
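The NodePressure step reads node status: the capacity quantities and the pressure conditions. A sketch of the same read with client-go, under the same hypothetical-kubeconfig assumption as above; the accessor helpers return exactly the kinds of values logged ("203034800Ki", "2"):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; the helper accessors return Quantities.
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
```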
	I1007 11:54:27.421741 1401070 start.go:241] waiting for startup goroutines ...
	I1007 11:54:27.597835 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:27.747859 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:27.918326 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:28.104056 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:28.230713 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:28.379286 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:28.598750 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:28.731616 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:28.881779 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:29.099492 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:29.229940 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:29.378620 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:29.598211 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:29.732820 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:29.879163 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:30.099175 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:30.230937 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:30.378854 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:30.597885 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:30.730889 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:30.879126 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:31.097512 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:31.230394 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:31.378864 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:31.597874 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:31.731483 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:31.890725 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:32.098473 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:32.234128 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:32.380491 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:32.598644 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:32.733384 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:32.878895 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:33.134340 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:33.230802 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:33.378586 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:33.599189 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:33.731059 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:33.878565 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:34.099434 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:34.232288 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:34.379783 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:34.597712 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:34.730422 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:34.879827 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:35.097740 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:35.231393 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:35.379460 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:35.598700 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:35.730247 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:35.879834 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:36.098223 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:36.231706 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:36.378461 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:36.598901 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:36.730965 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:36.877800 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:37.098196 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:37.229750 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:37.378461 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:37.597652 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:37.770917 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:37.878392 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:38.098359 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:38.230230 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:38.378377 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:38.598264 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:38.730019 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:38.879581 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:39.098791 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:39.231636 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:39.378655 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:39.598626 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:39.731437 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:39.879562 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:40.098186 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:40.230927 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:40.378919 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:40.598252 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:40.733291 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:40.879269 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:41.098714 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:41.230669 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:41.378341 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:41.597985 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:41.730588 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:41.878622 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:42.101890 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:42.231692 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:42.378987 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:42.598975 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:42.732156 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:42.879388 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:43.101886 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:43.239991 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:43.382307 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:43.599593 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:43.733513 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:43.897952 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:44.097785 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:44.231716 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:44.379262 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:44.600410 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:44.734347 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:44.881307 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:45.099554 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:45.231462 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:45.379357 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:45.598183 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:45.731296 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:45.878963 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:46.097734 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:46.267328 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:46.385338 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:46.598672 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:46.732662 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:46.878712 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:47.098501 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:47.230516 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:47.379746 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:47.598119 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:47.730773 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:47.878026 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:48.098066 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:48.230292 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:48.377978 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:48.597715 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:48.730531 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:48.879272 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:49.097395 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:49.230588 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:49.379115 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:49.598733 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:49.731197 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:49.879292 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:50.099098 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:50.230952 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:50.378401 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:50.598443 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:54:50.729989 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:50.881083 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:51.098264 1401070 kapi.go:107] duration metric: took 30.504431479s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:54:51.230912 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:51.378665 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:51.730500 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:51.878813 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:52.264666 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:52.379513 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:52.730455 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:52.879413 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:53.232354 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:53.381770 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:53.764768 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:53.878487 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:54.230937 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:54.378247 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:54.765882 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:54.879051 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:55.230739 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:55.378665 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:55.731032 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:55.878794 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:56.230410 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:56.379485 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:56.731165 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:56.878841 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:57.266228 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:57.378334 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:57.734260 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:57.880630 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:58.230423 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:58.379033 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:58.731130 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:58.883751 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:59.230715 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:59.378192 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:54:59.732653 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:54:59.879144 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:00.261596 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:00.381065 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:00.812205 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:00.879286 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:01.230283 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:01.382498 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:01.732689 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:01.878393 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:02.269962 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:02.378703 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:02.764698 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:02.878714 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:03.230220 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:03.393974 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:03.730996 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:03.882306 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:04.230640 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:04.378396 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:04.731573 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:04.880406 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:05.230594 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:05.379350 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:05.736548 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:05.879855 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:06.231149 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:06.379092 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:06.730640 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:06.879592 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:07.230163 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:07.381572 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:07.733354 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:07.879016 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:08.271330 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:08.379128 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:08.730823 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:08.878414 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:09.230629 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:09.378234 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:09.731472 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:09.878636 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:55:10.232060 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:10.378673 1401070 kapi.go:107] duration metric: took 49.004924348s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 11:55:10.734109 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:11.241098 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:11.730629 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:12.238238 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:12.731802 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:13.236236 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:13.730248 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:14.230856 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:14.731834 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:15.231395 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:15.731013 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:16.265484 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:16.730997 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:17.230376 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:17.730345 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:18.230995 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:18.730030 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:19.230627 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:19.730278 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:20.230553 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:20.730313 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:21.230336 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:21.730153 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:22.230281 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:22.731014 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:23.231418 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:23.730768 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:24.265578 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:24.730958 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:25.231031 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:25.766527 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:26.230887 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:26.767344 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:27.232929 1401070 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:55:27.730503 1401070 kapi.go:107] duration metric: took 1m9.504486625s to wait for app.kubernetes.io/name=ingress-nginx ...
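Each of the repeated kapi.go:96 lines above is one iteration of a label-selector poll: list the pods matching the selector and keep waiting while any of them is still Pending. A stripped-down sketch of that loop; `waitForPods` and the kubeconfig path are hypothetical names for illustration, and minikube's real kapi helper additionally handles retries, empty results, and an overall deadline:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods (hypothetical name) polls until every pod matching the
// selector reports phase Running.
func waitForPods(cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				// Mirrors the shape of the logged kapi.go:96 lines.
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
		panic(err)
	}
}
```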
	I1007 11:55:45.680025 1401070 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:55:45.680051 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:46.164142 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:46.663101 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:47.164289 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:47.664031 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:48.163681 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:48.662768 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:49.163705 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:49.663780 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:50.163512 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:50.663088 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:51.163447 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:51.663855 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:52.163661 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:52.663356 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:53.164024 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:53.663840 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:54.163937 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:54.663353 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:55.163808 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:55.663770 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:56.163930 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:56.663864 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:57.163345 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:57.666464 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:58.163698 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:58.663906 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:59.164276 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:55:59.663841 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:00.166763 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:00.663571 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:01.164152 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:01.664796 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:02.164148 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:02.663573 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:03.163423 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:03.662798 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:04.164210 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:04.663269 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:05.164827 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:05.663165 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:06.165585 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:06.663848 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:07.163469 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:07.667863 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:08.163435 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:08.662838 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:09.163973 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:09.663451 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:10.163783 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:10.663931 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:11.164324 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:11.663192 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:12.164359 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:12.662838 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:13.164238 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:13.664238 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:14.164531 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:14.663245 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:15.169349 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:15.664090 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:16.164248 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:16.662800 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:17.163940 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:17.668614 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:18.163467 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:18.663120 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:19.173035 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:19.663779 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:20.164075 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:20.664069 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:21.164035 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:21.664049 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:22.164015 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:22.664310 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:23.164368 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:23.663107 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:24.169911 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:24.663680 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:25.166301 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:25.670509 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 54 near-identical kapi.go:96 "waiting for pod" poll lines, one roughly every 500ms from 11:56:26 through 11:56:52, omitted ...]
	I1007 11:56:53.163308 1401070 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:56:53.668813 1401070 kapi.go:107] duration metric: took 2m31.009126959s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 11:56:53.671340 1401070 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-268164 cluster.
	I1007 11:56:53.674270 1401070 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 11:56:53.676559 1401070 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 11:56:53.678532 1401070 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, default-storageclass, storage-provisioner, cloud-spanner, storage-provisioner-rancher, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 11:56:53.680686 1401070 addons.go:510] duration metric: took 2m43.861497877s for enable addons: enabled=[ingress-dns nvidia-device-plugin default-storageclass storage-provisioner cloud-spanner storage-provisioner-rancher volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 11:56:53.680746 1401070 start.go:246] waiting for cluster config update ...
	I1007 11:56:53.680767 1401070 start.go:255] writing updated cluster config ...
	I1007 11:56:53.681086 1401070 ssh_runner.go:195] Run: rm -f paused
	I1007 11:56:54.045828 1401070 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:56:54.047338 1401070 out.go:177] * Done! kubectl is now configured to use "addons-268164" cluster and "default" namespace by default
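
The gcp-auth opt-out described in the enable-addons output above is just a pod label. A minimal sketch of what that looks like, assuming a throwaway pod (the name and image below are placeholders; the label key is the one named in the log, and the "true" value is a conventional choice):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds              # placeholder name
  labels:
    # The gcp-auth mutating webhook skips pods carrying this label,
    # so no GCP credential secret is mounted into them.
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image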
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	64df20c5f5b41       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   aab5d1e7bed42       gcp-auth-89d5ffd79-cztkh
	6a99941d1d5b4       1a9605c872c1d       4 minutes ago       Running             admission                                0                   b2a78a470c680       volcano-admission-5874dfdd79-8sxc6
	c28eb9c4381ce       289a818c8d9c5       4 minutes ago       Running             controller                               0                   a7da0634402b4       ingress-nginx-controller-bc57996ff-4g8tm
	99a46c34e172a       420193b27261a       5 minutes ago       Exited              patch                                    2                   171f0147fce79       ingress-nginx-admission-patch-2gbp9
	b29554720702c       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	53d04a7a69e22       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	dc80b1c2edce2       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	fd125d8df0448       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	17e20dc61840c       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	2677ed5b4ede7       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   13e24dee82496       csi-hostpath-attacher-0
	5c5a2b82445b4       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   d7ec8cf4424b0       volcano-controllers-789ffc5785-dnqbf
	8566352d0e86f       420193b27261a       5 minutes ago       Exited              create                                   0                   ea588893e3a00       ingress-nginx-admission-create-r8x7j
	e6680acebf2e7       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   fa9b16a1e2ceb       volcano-scheduler-6c9778cbdf-dpq9k
	c164e6eef6d4f       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a7297794ebd61       snapshot-controller-56fcc65765-sj6bv
	e0a41b3cdb4ae       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   f30d3770f99ad       csi-hostpathplugin-fgt4c
	98e6117f186d6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   1a6f013bc2cdb       snapshot-controller-56fcc65765-bgvc8
	e58c356b33b21       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   45bb4cb23e919       registry-proxy-fs9v5
	9340d3f452d07       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   8f49d26a69501       local-path-provisioner-86d989889c-dwmbs
	c38245d691b40       77bdba588b953       5 minutes ago       Running             yakd                                     0                   5ce9866aacdae       yakd-dashboard-67d98fc6b-949jb
	ca2b5179aa7ee       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   c8ee2e6bcd6a2       metrics-server-84c5f94fbc-lt7q4
	28bb60fe3b227       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   d3dd0dffa6b37       cloud-spanner-emulator-5b584cc74-g8llr
	17936e96fad7d       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   9b4d0d37a8c65       nvidia-device-plugin-daemonset-c95fk
	caea931026747       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   f8cd126424d0f       registry-66c9cd494c-c2dt2
	203e36c140c09       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   a9c9be5e9e56d       csi-hostpath-resizer-0
	75f78125d8173       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   eba0f007171a1       gadget-j46m9
	c90f3f9322989       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   8afd68aee1ee7       coredns-7c65d6cfc9-42kp9
	8b62c06e297ed       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   249878b63f750       kube-ingress-dns-minikube
	90def97d4810c       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   d6f1c53678349       storage-provisioner
	c9abb6d1e6f7d       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   881a7b8f162d3       kindnet-69x4j
	08ca19d7761b2       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   c4bac41abc123       kube-proxy-nrcbk
	1bec7a8fa0964       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   0bdb3e316c798       kube-scheduler-addons-268164
	22fec4dae2ee5       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   2bb52cabbfdaf       kube-apiserver-addons-268164
	d21103bdeb177       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   b9334e9a92c7a       kube-controller-manager-addons-268164
	567b5eae29660       27e3830e14027       6 minutes ago       Running             etcd                                     0                   0c57cfa23f8a9       etcd-addons-268164
	
	
	==> containerd <==
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.695289193Z" level=info msg="TearDown network for sandbox \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\" successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.695330472Z" level=info msg="StopPodSandbox for \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\" returns successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.696108274Z" level=info msg="RemovePodSandbox for \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\""
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.696150160Z" level=info msg="Forcibly stopping sandbox \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\""
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.705476849Z" level=info msg="TearDown network for sandbox \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\" successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.712858635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.713008399Z" level=info msg="RemovePodSandbox \"b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917\" returns successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.713685855Z" level=info msg="StopPodSandbox for \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\""
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.721646055Z" level=info msg="TearDown network for sandbox \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\" successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.721842727Z" level=info msg="StopPodSandbox for \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\" returns successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.722532170Z" level=info msg="RemovePodSandbox for \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\""
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.722587332Z" level=info msg="Forcibly stopping sandbox \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\""
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.730836042Z" level=info msg="TearDown network for sandbox \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\" successfully"
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.737657210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 11:57:04 addons-268164 containerd[812]: time="2024-10-07T11:57:04.737923230Z" level=info msg="RemovePodSandbox \"9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b\" returns successfully"
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.743258913Z" level=info msg="RemoveContainer for \"e3f907c6547847cc7ad7ec8807ae7bb8d32dfa09ae8d3515d13e7b96f911096c\""
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.750631534Z" level=info msg="RemoveContainer for \"e3f907c6547847cc7ad7ec8807ae7bb8d32dfa09ae8d3515d13e7b96f911096c\" returns successfully"
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.752732658Z" level=info msg="StopPodSandbox for \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\""
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.760317976Z" level=info msg="TearDown network for sandbox \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\" successfully"
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.760488622Z" level=info msg="StopPodSandbox for \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\" returns successfully"
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.761057764Z" level=info msg="RemovePodSandbox for \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\""
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.761107322Z" level=info msg="Forcibly stopping sandbox \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\""
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.768814097Z" level=info msg="TearDown network for sandbox \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\" successfully"
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.777072948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 11:58:04 addons-268164 containerd[812]: time="2024-10-07T11:58:04.777341314Z" level=info msg="RemovePodSandbox \"f58fa5c768ad7d84a4b09da19cb5eab3a5a9149de0294759670a1fa7f2f19c29\" returns successfully"
	
	
	==> coredns [c90f3f9322989b7c6158ac89417a97ac9cce9da3124b04f40e2df83a601ec098] <==
	[INFO] 10.244.0.7:48525 - 39336 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00008269s
	[INFO] 10.244.0.7:48525 - 44283 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002298124s
	[INFO] 10.244.0.7:48525 - 20870 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002650148s
	[INFO] 10.244.0.7:48525 - 15157 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000119915s
	[INFO] 10.244.0.7:48525 - 55158 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000078956s
	[INFO] 10.244.0.7:39023 - 59340 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084462s
	[INFO] 10.244.0.7:39023 - 59537 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000042207s
	[INFO] 10.244.0.7:39548 - 26338 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055334s
	[INFO] 10.244.0.7:39548 - 26502 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037955s
	[INFO] 10.244.0.7:35768 - 40589 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056154s
	[INFO] 10.244.0.7:35768 - 40778 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069881s
	[INFO] 10.244.0.7:39022 - 19256 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001442539s
	[INFO] 10.244.0.7:39022 - 19677 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001595414s
	[INFO] 10.244.0.7:46643 - 57799 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053365s
	[INFO] 10.244.0.7:46643 - 57947 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000050682s
	[INFO] 10.244.0.24:58321 - 29403 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000163245s
	[INFO] 10.244.0.24:37009 - 26896 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189657s
	[INFO] 10.244.0.24:52635 - 30299 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161891s
	[INFO] 10.244.0.24:44902 - 56806 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010796s
	[INFO] 10.244.0.24:44198 - 378 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000210661s
	[INFO] 10.244.0.24:34606 - 47554 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150273s
	[INFO] 10.244.0.24:55326 - 33703 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003087118s
	[INFO] 10.244.0.24:59831 - 3833 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003086412s
	[INFO] 10.244.0.24:56282 - 689 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00258167s
	[INFO] 10.244.0.24:53954 - 47515 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.003512707s
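
The NXDOMAIN bursts in the coredns log above are ordinary cluster DNS search-path expansion: with the default ndots:5, a lookup of storage.googleapis.com is first tried against each search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the provider domain) before the bare name resolves. Pods that mostly talk to external names can reduce that chatter via dnsConfig; a minimal sketch, with placeholder pod name and image:

apiVersion: v1
kind: Pod
metadata:
  name: external-dns-client       # placeholder name
spec:
  dnsConfig:
    options:
    - name: ndots
      value: "1"                  # names with a dot are tried as absolute first, skipping search-domain expansion
  containers:
  - name: app
    image: nginx                  # placeholder image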
	
	
	==> describe nodes <==
	Name:               addons-268164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-268164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-268164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_54_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-268164
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-268164"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:54:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-268164
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:00:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:57:08 +0000   Mon, 07 Oct 2024 11:53:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:57:08 +0000   Mon, 07 Oct 2024 11:53:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:57:08 +0000   Mon, 07 Oct 2024 11:53:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:57:08 +0000   Mon, 07 Oct 2024 11:54:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-268164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3b2f902af8a45fead9a77fc9088895c
	  System UUID:                bd0f796d-89c6-47fe-a788-3c4e8b0b6faa
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-g8llr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-j46m9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-cztkh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-4g8tm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-42kp9                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-fgt4c                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-268164                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m9s
	  kube-system                 kindnet-69x4j                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-addons-268164                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-addons-268164       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-nrcbk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-268164                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 metrics-server-84c5f94fbc-lt7q4             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-c95fk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-66c9cd494c-c2dt2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-fs9v5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 snapshot-controller-56fcc65765-bgvc8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-sj6bv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-dwmbs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-5874dfdd79-8sxc6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-789ffc5785-dnqbf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-6c9778cbdf-dpq9k          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-949jb              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m15s (x4 over 6m15s)  kubelet          Node addons-268164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x4 over 6m15s)  kubelet          Node addons-268164 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x3 over 6m15s)  kubelet          Node addons-268164 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m9s                   kubelet          Node addons-268164 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m9s                   kubelet          Node addons-268164 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m9s                   kubelet          Node addons-268164 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m5s                   node-controller  Node addons-268164 event: Registered Node addons-268164 in Controller
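
Worked out from the capacity and allocation tables above: allocatable CPU is 2000m and non-terminated pods already request 1050m (52%), leaving 950m of schedulable headroom. Any pod requesting a full CPU would therefore stay Pending on this node; a hypothetical illustration (name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: cpu-hungry                # hypothetical pod
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    resources:
      requests:
        cpu: "1"                  # 1000m requested > 950m headroom, so it cannot schedule here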
	
	
	==> dmesg <==
	
	
	==> etcd [567b5eae2966001f80b5f9df352674f09a49b9a751c4c82c174a9c4ea59fdc11] <==
	{"level":"info","ts":"2024-10-07T11:53:58.997424Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T11:53:58.997444Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T11:53:58.997574Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-07T11:53:58.997587Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-07T11:53:59.575562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T11:53:59.575782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T11:53:59.575908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-07T11:53:59.575996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T11:53:59.576081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-07T11:53:59.576149Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-07T11:53:59.576227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-07T11:53:59.577536Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-268164 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:53:59.577686Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:53:59.577786Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:53:59.577812Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:53:59.579666Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:53:59.591783Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:53:59.580434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:53:59.591994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:53:59.592185Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:53:59.592278Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:53:59.592971Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:53:59.596393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:53:59.592990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	2024/10/07 11:54:04 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> gcp-auth [64df20c5f5b41c1c2d948c1cb4c49312f5a874d478918c78b5e80892c2f7ee14] <==
	2024/10/07 11:56:52 GCP Auth Webhook started!
	2024/10/07 11:57:11 Ready to marshal response ...
	2024/10/07 11:57:11 Ready to write response ...
	2024/10/07 11:57:12 Ready to marshal response ...
	2024/10/07 11:57:12 Ready to write response ...
	
	
	==> kernel <==
	 12:00:14 up 1 day,  1:42,  0 users,  load average: 0.65, 1.87, 2.15
	Linux addons-268164 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c9abb6d1e6f7d5b1ced492462206fd5322e9117acc9b43e773e3a569ffe52e9a] <==
	I1007 11:58:11.596959       1 main.go:299] handling current node
	I1007 11:58:21.602264       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:58:21.602297       1 main.go:299] handling current node
	I1007 11:58:31.602471       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:58:31.602510       1 main.go:299] handling current node
	I1007 11:58:41.603609       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:58:41.603648       1 main.go:299] handling current node
	I1007 11:58:51.603609       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:58:51.603643       1 main.go:299] handling current node
	I1007 11:59:01.602962       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:01.603018       1 main.go:299] handling current node
	I1007 11:59:11.596415       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:11.596448       1 main.go:299] handling current node
	I1007 11:59:21.603636       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:21.603671       1 main.go:299] handling current node
	I1007 11:59:31.602561       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:31.602608       1 main.go:299] handling current node
	I1007 11:59:41.596152       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:41.596428       1 main.go:299] handling current node
	I1007 11:59:51.596411       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 11:59:51.596448       1 main.go:299] handling current node
	I1007 12:00:01.598775       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 12:00:01.598903       1 main.go:299] handling current node
	I1007 12:00:11.595923       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 12:00:11.595970       1 main.go:299] handling current node
	
	
	==> kube-apiserver [22fec4dae2ee5f01164972a380e0d8071f67ee5eebb1a517bc032367a8f41552] <==
	W1007 11:55:23.261780       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:24.336113       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:25.387725       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:25.629742       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.222.67:443: connect: connection refused
	E1007 11:55:25.629782       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.222.67:443: connect: connection refused" logger="UnhandledError"
	W1007 11:55:25.631584       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:25.688194       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.222.67:443: connect: connection refused
	E1007 11:55:25.688238       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.222.67:443: connect: connection refused" logger="UnhandledError"
	W1007 11:55:25.689910       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:26.474989       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:27.494392       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:28.584262       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:29.619208       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:30.696219       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:31.731130       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:32.785044       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:33.877535       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.120.168:443: connect: connection refused
	W1007 11:55:45.614583       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.222.67:443: connect: connection refused
	E1007 11:55:45.614623       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.222.67:443: connect: connection refused" logger="UnhandledError"
	W1007 11:56:25.640270       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.222.67:443: connect: connection refused
	E1007 11:56:25.640312       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.222.67:443: connect: connection refused" logger="UnhandledError"
	W1007 11:56:25.696967       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.222.67:443: connect: connection refused
	E1007 11:56:25.697011       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.222.67:443: connect: connection refused" logger="UnhandledError"
	I1007 11:57:11.652041       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1007 11:57:11.690180       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
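
The apiserver log above shows both webhook failure modes at once: volcano's queue and pod webhooks "fail closed" (requests are rejected while the admission service is unreachable), while the gcp-auth webhook "fails open" (requests are admitted unmutated). That choice is the per-webhook failurePolicy field. A minimal sketch with placeholder names, showing only the field relevant here plus the ones the v1 API requires:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-webhook           # placeholder name
webhooks:
- name: example.mutate.k8s.io     # placeholder webhook name
  # Fail   = "failing closed", as with mutatequeue.volcano.sh above.
  # Ignore = "failing open", as with gcp-auth-mutate.k8s.io above.
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      namespace: example-ns       # placeholder
      name: example-svc           # placeholder
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]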
	
	
	==> kube-controller-manager [d21103bdeb1771896f35446af0d94030546c0b77fe5abf701734d4b285351a95] <==
	I1007 11:56:25.662903       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:25.667311       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:25.680351       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:25.705928       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:25.713079       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:25.724949       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:25.731660       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:26.461761       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:26.476434       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:27.595232       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:27.614699       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:28.600022       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:28.611237       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:28.613898       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 11:56:28.622060       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:28.629653       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:28.638566       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 11:56:53.559388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="15.5094ms"
	I1007 11:56:53.559659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="47.843µs"
	I1007 11:56:58.022705       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1007 11:56:58.028907       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1007 11:56:58.066830       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1007 11:56:58.067427       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1007 11:57:08.396888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-268164"
	I1007 11:57:11.346385       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [08ca19d7761b28fdf33e28ee315ef9b8a73bc93e00d399eac9aac1375a3c77c8] <==
	I1007 11:54:10.956291       1 server_linux.go:66] "Using iptables proxy"
	I1007 11:54:11.108004       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1007 11:54:11.108092       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:54:11.160538       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 11:54:11.160636       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:54:11.164081       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:54:11.164423       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:54:11.164437       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:54:11.172844       1 config.go:199] "Starting service config controller"
	I1007 11:54:11.172909       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:54:11.172949       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:54:11.172954       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:54:11.175855       1 config.go:328] "Starting node config controller"
	I1007 11:54:11.175870       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:54:11.273956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:54:11.274014       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:54:11.276432       1 shared_informer.go:320] Caches are synced for node config
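
The "nodePortAddresses is unset" warning at kube-proxy startup above means NodePort services accept connections on every local IP. The knob it points at lives in KubeProxyConfiguration; a minimal sketch scoping NodePorts to this node's address (the CIDR below is taken from the InternalIP reported earlier, and newer releases also accept the `primary` keyword the warning suggests):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# Only accept NodePort connections on the listed CIDRs instead of all local IPs.
nodePortAddresses:
- 192.168.49.2/32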
	
	
	==> kube-scheduler [1bec7a8fa0964af6903ca5843407f3980e22b935c47b2b48a941ca87a4fac720] <==
	W1007 11:54:03.200510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:54:03.200729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.200956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 11:54:03.200981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.201075       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:54:03.201097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.201196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:54:03.201221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.205615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 11:54:03.205666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.205842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:54:03.205863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.205917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:54:03.205935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.206005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:54:03.206024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.206088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 11:54:03.206105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.206153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:54:03.206173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.206222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 11:54:03.206248       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:54:03.206477       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:54:03.206506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1007 11:54:04.485705       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
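
The burst of "forbidden" reflector warnings above is a familiar startup transient: the scheduler's informers begin listing resources before the API server has finished reconciling the built-in RBAC bindings for system:kube-scheduler, and the "Caches are synced" line one second later shows it recovered on its own. If such errors ever persist, kubectl's impersonation support gives a quick probe of the scheduler's effective permissions (a sketch, assuming the kubeconfig user is allowed to impersonate):

    kubectl --context addons-268164 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context addons-268164 auth can-i list statefulsets.apps --as=system:kube-scheduler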
	
	
	==> kubelet <==
	Oct 07 11:56:25 addons-268164 kubelet[1474]: I1007 11:56:25.796073    1474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjj64\" (UniqueName: \"kubernetes.io/projected/1e771da9-cba1-4343-a5b9-afa280f5c765-kube-api-access-qjj64\") pod \"gcp-auth-certs-patch-v9lvs\" (UID: \"1e771da9-cba1-4343-a5b9-afa280f5c765\") " pod="gcp-auth/gcp-auth-certs-patch-v9lvs"
	Oct 07 11:56:25 addons-268164 kubelet[1474]: I1007 11:56:25.796165    1474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8db\" (UniqueName: \"kubernetes.io/projected/869e40e5-6c86-427b-81ba-a3a72309138d-kube-api-access-dq8db\") pod \"gcp-auth-certs-create-22rvt\" (UID: \"869e40e5-6c86-427b-81ba-a3a72309138d\") " pod="gcp-auth/gcp-auth-certs-create-22rvt"
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.711965    1474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjj64\" (UniqueName: \"kubernetes.io/projected/1e771da9-cba1-4343-a5b9-afa280f5c765-kube-api-access-qjj64\") pod \"1e771da9-cba1-4343-a5b9-afa280f5c765\" (UID: \"1e771da9-cba1-4343-a5b9-afa280f5c765\") "
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.712534    1474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8db\" (UniqueName: \"kubernetes.io/projected/869e40e5-6c86-427b-81ba-a3a72309138d-kube-api-access-dq8db\") pod \"869e40e5-6c86-427b-81ba-a3a72309138d\" (UID: \"869e40e5-6c86-427b-81ba-a3a72309138d\") "
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.714222    1474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e771da9-cba1-4343-a5b9-afa280f5c765-kube-api-access-qjj64" (OuterVolumeSpecName: "kube-api-access-qjj64") pod "1e771da9-cba1-4343-a5b9-afa280f5c765" (UID: "1e771da9-cba1-4343-a5b9-afa280f5c765"). InnerVolumeSpecName "kube-api-access-qjj64". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.714431    1474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869e40e5-6c86-427b-81ba-a3a72309138d-kube-api-access-dq8db" (OuterVolumeSpecName: "kube-api-access-dq8db") pod "869e40e5-6c86-427b-81ba-a3a72309138d" (UID: "869e40e5-6c86-427b-81ba-a3a72309138d"). InnerVolumeSpecName "kube-api-access-dq8db". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.813341    1474 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dq8db\" (UniqueName: \"kubernetes.io/projected/869e40e5-6c86-427b-81ba-a3a72309138d-kube-api-access-dq8db\") on node \"addons-268164\" DevicePath \"\""
	Oct 07 11:56:27 addons-268164 kubelet[1474]: I1007 11:56:27.813384    1474 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qjj64\" (UniqueName: \"kubernetes.io/projected/1e771da9-cba1-4343-a5b9-afa280f5c765-kube-api-access-qjj64\") on node \"addons-268164\" DevicePath \"\""
	Oct 07 11:56:28 addons-268164 kubelet[1474]: I1007 11:56:28.459671    1474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9250e63d4c27fb7685b049e309871c94f574714c91b1044bd50bdad6e888295b"
	Oct 07 11:56:28 addons-268164 kubelet[1474]: I1007 11:56:28.464754    1474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d1d300efd911983fa5a382e80688d9561c3951697f9a0c74bf9d155662d917"
	Oct 07 11:56:53 addons-268164 kubelet[1474]: I1007 11:56:53.544582    1474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-cztkh" podStartSLOduration=65.661271613 podStartE2EDuration="1m8.544562349s" podCreationTimestamp="2024-10-07 11:55:45 +0000 UTC" firstStartedPulling="2024-10-07 11:56:49.952608113 +0000 UTC m=+165.458792875" lastFinishedPulling="2024-10-07 11:56:52.835898849 +0000 UTC m=+168.342083611" observedRunningTime="2024-10-07 11:56:53.544054874 +0000 UTC m=+169.050239677" watchObservedRunningTime="2024-10-07 11:56:53.544562349 +0000 UTC m=+169.050747110"
	Oct 07 11:56:58 addons-268164 kubelet[1474]: I1007 11:56:58.638360    1474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e771da9-cba1-4343-a5b9-afa280f5c765" path="/var/lib/kubelet/pods/1e771da9-cba1-4343-a5b9-afa280f5c765/volumes"
	Oct 07 11:56:58 addons-268164 kubelet[1474]: I1007 11:56:58.639616    1474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869e40e5-6c86-427b-81ba-a3a72309138d" path="/var/lib/kubelet/pods/869e40e5-6c86-427b-81ba-a3a72309138d/volumes"
	Oct 07 11:57:04 addons-268164 kubelet[1474]: I1007 11:57:04.669091    1474 scope.go:117] "RemoveContainer" containerID="cda3b9c31faa4fc73d43e3f8335972349cdf1e47ce0d3aef51f2507b3d8b5e79"
	Oct 07 11:57:04 addons-268164 kubelet[1474]: I1007 11:57:04.677622    1474 scope.go:117] "RemoveContainer" containerID="d86c7ed31c9d5992a537ddd200567dc6af589a6bab6c96c411c00614c3f62ee3"
	Oct 07 11:57:12 addons-268164 kubelet[1474]: I1007 11:57:12.639195    1474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32216e9e-7001-4a75-818e-0a091c3fe9e0" path="/var/lib/kubelet/pods/32216e9e-7001-4a75-818e-0a091c3fe9e0/volumes"
	Oct 07 11:57:16 addons-268164 kubelet[1474]: I1007 11:57:16.635359    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-c95fk" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:57:16 addons-268164 kubelet[1474]: I1007 11:57:16.638346    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-c2dt2" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:57:29 addons-268164 kubelet[1474]: I1007 11:57:29.635242    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fs9v5" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:58:04 addons-268164 kubelet[1474]: I1007 11:58:04.741540    1474 scope.go:117] "RemoveContainer" containerID="e3f907c6547847cc7ad7ec8807ae7bb8d32dfa09ae8d3515d13e7b96f911096c"
	Oct 07 11:58:40 addons-268164 kubelet[1474]: I1007 11:58:40.634726    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-c2dt2" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:58:42 addons-268164 kubelet[1474]: I1007 11:58:42.635081    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-c95fk" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:58:58 addons-268164 kubelet[1474]: I1007 11:58:58.636149    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-fs9v5" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:59:43 addons-268164 kubelet[1474]: I1007 11:59:43.635186    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-c2dt2" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:59:49 addons-268164 kubelet[1474]: I1007 11:59:49.635699    1474 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-c95fk" secret="" err="secret \"gcp-auth\" not found"
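
The recurring "Unable to retrieve pull secret" messages are informational rather than fatal: the gcp-auth addon is expected to place an image-pull secret named "gcp-auth" into the namespaces it covers, and the kubelet logs this whenever a pod references the secret before it exists. A quick existence check (a sketch; the namespace is taken from the pods above):

    kubectl --context addons-268164 -n kube-system get secret gcp-auth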
	
	
	==> storage-provisioner [90def97d4810c7ed196c6bc65f8ebfc2dd458a4182f6deda99b3da50a36951bc] <==
	I1007 11:54:16.084119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:54:16.117064       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:54:16.117130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:54:16.144813       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:54:16.144986       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-268164_aede5def-8a18-40a2-ab60-7ac590c6a169!
	I1007 11:54:16.145866       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df3cabea-d9df-47c0-b728-d2ef1e2187c8", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-268164_aede5def-8a18-40a2-ab60-7ac590c6a169 became leader
	I1007 11:54:16.245740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-268164_aede5def-8a18-40a2-ab60-7ac590c6a169!
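
As the Event above records, the provisioner's leader election uses the Endpoints object kube-system/k8s.io-minikube-hostpath as its lock; the current holder is recorded in the lock object's leader-election annotation. It can be inspected directly (a sketch):

    kubectl --context addons-268164 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml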
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-268164 -n addons-268164
helpers_test.go:261: (dbg) Run:  kubectl --context addons-268164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-r8x7j ingress-nginx-admission-patch-2gbp9 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-268164 describe pod ingress-nginx-admission-create-r8x7j ingress-nginx-admission-patch-2gbp9 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-268164 describe pod ingress-nginx-admission-create-r8x7j ingress-nginx-admission-patch-2gbp9 test-job-nginx-0: exit status 1 (116.457331ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r8x7j" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2gbp9" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-268164 describe pod ingress-nginx-admission-create-r8x7j ingress-nginx-admission-patch-2gbp9 test-job-nginx-0: exit status 1
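
The three NotFound errors are an artifact of the post-mortem itself: the describe at helpers_test.go:277 passes bare pod names without namespaces, so kubectl looks for them in "default" while the pods actually live in ingress-nginx and my-volcano (and may already have been garbage-collected by then). A namespace-aware version of the same post-mortem might look like this (a sketch, not the helper's actual code):

    kubectl --context addons-268164 get pods -A --field-selector=status.phase!=Running \
      --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
    while read -r ns name; do
      kubectl --context addons-268164 -n "$ns" describe pod "$name" || true
    done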
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable volcano --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable volcano --alsologtostderr -v=1: (11.263388549s)
--- FAIL: TestAddons/serial/Volcano (212.31s)

TestStartStop/group/old-k8s-version/serial/SecondStart (375.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m12.513171781s)

-- stdout --
	* [old-k8s-version-130031] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-130031" primary control-plane node in "old-k8s-version-130031" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Restarting existing docker container for "old-k8s-version-130031" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-130031 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I1007 12:42:44.876376 1605045 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:42:44.876504 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:42:44.876514 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:42:44.876519 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:42:44.876775 1605045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:42:44.877194 1605045 out.go:352] Setting JSON to false
	I1007 12:42:44.878184 1605045 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":95116,"bootTime":1728209849,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 12:42:44.878260 1605045 start.go:139] virtualization:  
	I1007 12:42:44.881391 1605045 out.go:177] * [old-k8s-version-130031] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:42:44.884810 1605045 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:42:44.884884 1605045 notify.go:220] Checking for updates...
	I1007 12:42:44.890256 1605045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:42:44.892976 1605045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:42:44.895935 1605045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 12:42:44.898573 1605045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:42:44.901142 1605045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:42:44.904400 1605045 config.go:182] Loaded profile config "old-k8s-version-130031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 12:42:44.907519 1605045 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 12:42:44.910079 1605045 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:42:44.945878 1605045 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:42:44.946050 1605045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:42:45.003099 1605045 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 12:42:44.991436534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:42:45.003237 1605045 docker.go:318] overlay module found
	I1007 12:42:45.006636 1605045 out.go:177] * Using the docker driver based on existing profile
	I1007 12:42:45.020040 1605045 start.go:297] selected driver: docker
	I1007 12:42:45.020089 1605045 start.go:901] validating driver "docker" against &{Name:old-k8s-version-130031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:42:45.020204 1605045 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:42:45.020969 1605045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:42:45.096705 1605045 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-10-07 12:42:45.076416558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:42:45.097162 1605045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:42:45.097196 1605045 cni.go:84] Creating CNI manager for ""
	I1007 12:42:45.097242 1605045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:42:45.097303 1605045 start.go:340] cluster config:
	{Name:old-k8s-version-130031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:42:45.100348 1605045 out.go:177] * Starting "old-k8s-version-130031" primary control-plane node in "old-k8s-version-130031" cluster
	I1007 12:42:45.103399 1605045 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 12:42:45.107082 1605045 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:42:45.110493 1605045 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:42:45.110527 1605045 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 12:42:45.110686 1605045 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 12:42:45.110714 1605045 cache.go:56] Caching tarball of preloaded images
	I1007 12:42:45.110819 1605045 preload.go:172] Found /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 12:42:45.110856 1605045 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1007 12:42:45.111111 1605045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/config.json ...
	I1007 12:42:45.139070 1605045 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 12:42:45.139097 1605045 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 12:42:45.139119 1605045 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:42:45.139192 1605045 start.go:360] acquireMachinesLock for old-k8s-version-130031: {Name:mka8ff2a7c40580b0778c81608c583f7e322b759 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:42:45.139282 1605045 start.go:364] duration metric: took 58.395µs to acquireMachinesLock for "old-k8s-version-130031"
	I1007 12:42:45.139313 1605045 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:42:45.139329 1605045 fix.go:54] fixHost starting: 
	I1007 12:42:45.139690 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:45.161330 1605045 fix.go:112] recreateIfNeeded on old-k8s-version-130031: state=Stopped err=<nil>
	W1007 12:42:45.161379 1605045 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:42:45.164687 1605045 out.go:177] * Restarting existing docker container for "old-k8s-version-130031" ...
	I1007 12:42:45.167633 1605045 cli_runner.go:164] Run: docker start old-k8s-version-130031
	I1007 12:42:45.535826 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:45.556874 1605045 kic.go:430] container "old-k8s-version-130031" state is running.
	I1007 12:42:45.557281 1605045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130031
	I1007 12:42:45.582671 1605045 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/config.json ...
	I1007 12:42:45.582891 1605045 machine.go:93] provisionDockerMachine start ...
	I1007 12:42:45.582945 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:45.606315 1605045 main.go:141] libmachine: Using SSH client type: native
	I1007 12:42:45.606578 1605045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38186 <nil> <nil>}
	I1007 12:42:45.606587 1605045 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:42:45.607398 1605045 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1007 12:42:48.747171 1605045 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-130031
	
	I1007 12:42:48.747197 1605045 ubuntu.go:169] provisioning hostname "old-k8s-version-130031"
	I1007 12:42:48.747260 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:48.769318 1605045 main.go:141] libmachine: Using SSH client type: native
	I1007 12:42:48.769620 1605045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38186 <nil> <nil>}
	I1007 12:42:48.769638 1605045 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-130031 && echo "old-k8s-version-130031" | sudo tee /etc/hostname
	I1007 12:42:48.919433 1605045 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-130031
	
	I1007 12:42:48.919525 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:48.943751 1605045 main.go:141] libmachine: Using SSH client type: native
	I1007 12:42:48.944015 1605045 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38186 <nil> <nil>}
	I1007 12:42:48.944037 1605045 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-130031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-130031/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-130031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:42:49.083886 1605045 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:42:49.083911 1605045 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1394934/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1394934/.minikube}
	I1007 12:42:49.083951 1605045 ubuntu.go:177] setting up certificates
	I1007 12:42:49.083962 1605045 provision.go:84] configureAuth start
	I1007 12:42:49.084040 1605045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130031
	I1007 12:42:49.102927 1605045 provision.go:143] copyHostCerts
	I1007 12:42:49.103003 1605045 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem, removing ...
	I1007 12:42:49.103017 1605045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem
	I1007 12:42:49.103099 1605045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem (1078 bytes)
	I1007 12:42:49.103204 1605045 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem, removing ...
	I1007 12:42:49.103215 1605045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem
	I1007 12:42:49.103244 1605045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem (1123 bytes)
	I1007 12:42:49.103301 1605045 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem, removing ...
	I1007 12:42:49.103309 1605045 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem
	I1007 12:42:49.103334 1605045 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem (1675 bytes)
	I1007 12:42:49.103386 1605045 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-130031 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-130031]
	I1007 12:42:49.451866 1605045 provision.go:177] copyRemoteCerts
	I1007 12:42:49.451946 1605045 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:42:49.451996 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:49.473840 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:49.577726 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 12:42:49.602498 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1007 12:42:49.627706 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:42:49.652462 1605045 provision.go:87] duration metric: took 568.48122ms to configureAuth
	I1007 12:42:49.652542 1605045 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:42:49.652750 1605045 config.go:182] Loaded profile config "old-k8s-version-130031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 12:42:49.652767 1605045 machine.go:96] duration metric: took 4.069868218s to provisionDockerMachine
	I1007 12:42:49.652777 1605045 start.go:293] postStartSetup for "old-k8s-version-130031" (driver="docker")
	I1007 12:42:49.652804 1605045 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:42:49.652856 1605045 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:42:49.652907 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:49.670149 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:49.764968 1605045 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:42:49.768295 1605045 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:42:49.768331 1605045 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:42:49.768342 1605045 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:42:49.768351 1605045 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:42:49.768362 1605045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/addons for local assets ...
	I1007 12:42:49.768421 1605045 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/files for local assets ...
	I1007 12:42:49.768509 1605045 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem -> 14003082.pem in /etc/ssl/certs
	I1007 12:42:49.768616 1605045 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:42:49.777424 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem --> /etc/ssl/certs/14003082.pem (1708 bytes)
	I1007 12:42:49.802293 1605045 start.go:296] duration metric: took 149.483455ms for postStartSetup
	I1007 12:42:49.802373 1605045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:42:49.802436 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:49.820236 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:49.916273 1605045 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:42:49.921191 1605045 fix.go:56] duration metric: took 4.781864734s for fixHost
	I1007 12:42:49.921219 1605045 start.go:83] releasing machines lock for "old-k8s-version-130031", held for 4.78191969s
	I1007 12:42:49.921320 1605045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-130031
	I1007 12:42:49.938751 1605045 ssh_runner.go:195] Run: cat /version.json
	I1007 12:42:49.938808 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:49.939129 1605045 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:42:49.939203 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:49.954803 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:49.973289 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:50.051604 1605045 ssh_runner.go:195] Run: systemctl --version
	I1007 12:42:50.195396 1605045 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:42:50.200777 1605045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 12:42:50.220184 1605045 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1007 12:42:50.220273 1605045 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:42:50.229890 1605045 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:42:50.229916 1605045 start.go:495] detecting cgroup driver to use...
	I1007 12:42:50.229951 1605045 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:42:50.230016 1605045 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 12:42:50.245135 1605045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 12:42:50.258690 1605045 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:42:50.258760 1605045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:42:50.272793 1605045 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:42:50.286735 1605045 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:42:50.391306 1605045 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:42:50.481991 1605045 docker.go:233] disabling docker service ...
	I1007 12:42:50.482072 1605045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:42:50.495076 1605045 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:42:50.506918 1605045 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:42:50.596919 1605045 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:42:50.694534 1605045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:42:50.705841 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:42:50.723374 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1007 12:42:50.733972 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 12:42:50.744102 1605045 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 12:42:50.744207 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 12:42:50.754128 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:42:50.764787 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 12:42:50.774741 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:42:50.784634 1605045 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:42:50.793869 1605045 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 12:42:50.803977 1605045 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:42:50.812614 1605045 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:42:50.821205 1605045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:50.915211 1605045 ssh_runner.go:195] Run: sudo systemctl restart containerd
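
The sed batch above normalizes /etc/containerd/config.toml before the restart: it pins the sandbox image to pause:3.2 for v1.20, forces SystemdCgroup = false to match the "cgroupfs" driver detected on the host, rewrites legacy runtime names to io.containerd.runc.v2, and points conf_dir back at /etc/cni/net.d. After the restart, the effective settings can be confirmed from the merged configuration (a sketch; both commands run inside the minikube container):

    sudo containerd config dump | grep -n SystemdCgroup
    sudo crictl info | grep -i cgroup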
	I1007 12:42:51.103792 1605045 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 12:42:51.103908 1605045 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 12:42:51.108802 1605045 start.go:563] Will wait 60s for crictl version
	I1007 12:42:51.108901 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:42:51.112602 1605045 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:42:51.152815 1605045 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1007 12:42:51.152905 1605045 ssh_runner.go:195] Run: containerd --version
	I1007 12:42:51.184501 1605045 ssh_runner.go:195] Run: containerd --version
	I1007 12:42:51.211393 1605045 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1007 12:42:51.214158 1605045 cli_runner.go:164] Run: docker network inspect old-k8s-version-130031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:42:51.228984 1605045 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1007 12:42:51.232694 1605045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
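
Both /etc/hosts updates in this run (the 127.0.1.1 hostname block earlier and host.minikube.internal here) follow the same idempotent pattern: filter out any stale entry, append the fresh one to a temp file, and copy it back over /etc/hosts. The result can be spot-checked from the host (a sketch, assuming the profile is running):

    minikube -p old-k8s-version-130031 ssh -- grep minikube.internal /etc/hosts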
	I1007 12:42:51.244424 1605045 kubeadm.go:883] updating cluster {Name:old-k8s-version-130031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:42:51.244570 1605045 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 12:42:51.244640 1605045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:51.292325 1605045 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 12:42:51.292347 1605045 containerd.go:534] Images already preloaded, skipping extraction
	I1007 12:42:51.292418 1605045 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:42:51.344228 1605045 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 12:42:51.344252 1605045 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:42:51.344260 1605045 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I1007 12:42:51.344426 1605045 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-130031 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
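
This drop-in is how minikube points the kubelet at containerd: on Kubernetes v1.20 the remote runtime still had to be selected explicitly with --container-runtime=remote plus the containerd socket (later releases dropped that flag along with dockershim). Two useful checks once the unit is installed (a sketch; run inside the node):

    sudo systemctl cat kubelet
    sudo journalctl -u kubelet --no-pager -n 50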
	I1007 12:42:51.344499 1605045 ssh_runner.go:195] Run: sudo crictl info
	I1007 12:42:51.383226 1605045 cni.go:84] Creating CNI manager for ""
	I1007 12:42:51.383255 1605045 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:42:51.383270 1605045 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:42:51.383291 1605045 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-130031 NodeName:old-k8s-version-130031 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 12:42:51.383467 1605045 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-130031"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
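The config rendered above is four YAML documents joined by "---": InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration. As a minimal sketch (not minikube code; the path is taken from the scp line below), the documents can be split and sanity-checked like this:

// Note: a minimal sketch (not minikube code) that splits a multi-document
// kubeadm config like the one above on "---" and reports each document's
// apiVersion/kind; the path comes from the scp line below.
package main

import (
    "fmt"
    "os"
    "strings"

    "gopkg.in/yaml.v3"
)

func main() {
    raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    if err != nil {
        panic(err)
    }
    for i, doc := range strings.Split(string(raw), "\n---\n") {
        var meta struct {
            APIVersion string `yaml:"apiVersion"`
            Kind       string `yaml:"kind"`
        }
        if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
            fmt.Printf("doc %d: parse error: %v\n", i, err)
            continue
        }
        fmt.Printf("doc %d: %s %s\n", i, meta.APIVersion, meta.Kind)
    }
}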
	I1007 12:42:51.383577 1605045 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 12:42:51.398574 1605045 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:42:51.398694 1605045 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:42:51.407878 1605045 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1007 12:42:51.428158 1605045 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:42:51.446642 1605045 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1007 12:42:51.466205 1605045 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1007 12:42:51.470087 1605045 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
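The grep checks whether /etc/hosts already maps 192.168.85.2 to control-plane.minikube.internal; the bash one-liner then rewrites the file idempotently: drop any stale line ending in the tab-separated host name, append the fresh mapping, and copy a temp file over /etc/hosts. A rough Go equivalent (a sketch; upsertHost is an illustrative name, and it renames in place rather than sudo-copying):

// Note: sketch of the idempotent /etc/hosts update performed by the shell
// one-liner above: remove any stale line for the host name, append the
// fresh mapping, then replace the file.
package main

import (
    "fmt"
    "os"
    "strings"
)

func upsertHost(path, ip, name string) error {
    raw, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) { // mirrors grep -v $'\t<name>$'
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    tmp := path + ".tmp"
    if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
        return err
    }
    return os.Rename(tmp, path) // the log uses `sudo cp` instead
}

func main() {
    if err := upsertHost("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
        os.Exit(1)
    }
}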
	I1007 12:42:51.481276 1605045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:51.570472 1605045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:42:51.588011 1605045 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031 for IP: 192.168.85.2
	I1007 12:42:51.588046 1605045 certs.go:194] generating shared ca certs ...
	I1007 12:42:51.588064 1605045 certs.go:226] acquiring lock for ca certs: {Name:mk4964dcb525e1a3c94069cf2fb52c246bc0ce74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:51.588229 1605045 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key
	I1007 12:42:51.588286 1605045 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key
	I1007 12:42:51.588298 1605045 certs.go:256] generating profile certs ...
	I1007 12:42:51.588390 1605045 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.key
	I1007 12:42:51.588475 1605045 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/apiserver.key.dae3067e
	I1007 12:42:51.588529 1605045 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/proxy-client.key
	I1007 12:42:51.588659 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308.pem (1338 bytes)
	W1007 12:42:51.588694 1605045 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308_empty.pem, impossibly tiny 0 bytes
	I1007 12:42:51.588707 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:42:51.588743 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem (1078 bytes)
	I1007 12:42:51.588774 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:42:51.588812 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem (1675 bytes)
	I1007 12:42:51.588862 1605045 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem (1708 bytes)
	I1007 12:42:51.589594 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:42:51.620141 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:42:51.648854 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:42:51.676186 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:42:51.704053 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 12:42:51.737324 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:42:51.764426 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:42:51.792285 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:42:51.819604 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem --> /usr/share/ca-certificates/14003082.pem (1708 bytes)
	I1007 12:42:51.845182 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:42:51.870479 1605045 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308.pem --> /usr/share/ca-certificates/1400308.pem (1338 bytes)
	I1007 12:42:51.896080 1605045 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:42:51.915919 1605045 ssh_runner.go:195] Run: openssl version
	I1007 12:42:51.923112 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14003082.pem && ln -fs /usr/share/ca-certificates/14003082.pem /etc/ssl/certs/14003082.pem"
	I1007 12:42:51.932936 1605045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14003082.pem
	I1007 12:42:51.936580 1605045 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:04 /usr/share/ca-certificates/14003082.pem
	I1007 12:42:51.936651 1605045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14003082.pem
	I1007 12:42:51.943716 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14003082.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:42:51.953119 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:42:51.963297 1605045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:51.967071 1605045 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:51.967143 1605045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:42:51.974776 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:42:51.984186 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1400308.pem && ln -fs /usr/share/ca-certificates/1400308.pem /etc/ssl/certs/1400308.pem"
	I1007 12:42:51.993871 1605045 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1400308.pem
	I1007 12:42:51.997797 1605045 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:04 /usr/share/ca-certificates/1400308.pem
	I1007 12:42:51.997895 1605045 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1400308.pem
	I1007 12:42:52.006024 1605045 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1400308.pem /etc/ssl/certs/51391683.0"
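Each CA dropped into /usr/share/ca-certificates also gets a symlink /etc/ssl/certs/<hash>.0, where <hash> is the OpenSSL subject hash printed by `openssl x509 -hash -noout` (3ec20f2e, b5213941 and 51391683 above); this is the c_rehash layout OpenSSL-based clients use to look up trust anchors. A sketch of that step (shelling out to openssl; linkCA and the example path are illustrative):

// Note: a sketch of the hash-symlink step seen above; it shells out to
// openssl for the subject hash rather than reimplementing it.
package main

import (
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func linkCA(certPath string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join("/etc/ssl/certs", hash+".0") // c_rehash naming: <subject-hash>.<n>
    _ = os.Remove(link)                                // mirrors ln -fs (force)
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        os.Exit(1)
    }
}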
	I1007 12:42:52.016460 1605045 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:42:52.020494 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:42:52.027825 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:42:52.035339 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:42:52.043359 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:42:52.050552 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:42:52.057763 1605045 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
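The six `-checkend 86400` runs ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force certificate regeneration on this restart path. A native-Go equivalent, as a minimal sketch using one of the paths from the log:

// Note: a native-Go equivalent (sketch) of `openssl x509 -checkend 86400`:
// parse the PEM certificate and fail if it expires within 24 hours.
package main

import (
    "crypto/x509"
    "encoding/pem"
    "errors"
    "fmt"
    "os"
    "time"
)

func checkEnd(path string, within time.Duration) error {
    raw, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    block, _ := pem.Decode(raw)
    if block == nil {
        return errors.New("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return err
    }
    if time.Until(cert.NotAfter) < within {
        return fmt.Errorf("certificate expires at %s", cert.NotAfter)
    }
    return nil
}

func main() {
    if err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}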
	I1007 12:42:52.065483 1605045 kubeadm.go:392] StartCluster: {Name:old-k8s-version-130031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-130031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:42:52.065596 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 12:42:52.065670 1605045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:42:52.105201 1605045 cri.go:89] found id: "8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:42:52.105224 1605045 cri.go:89] found id: "9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:42:52.105229 1605045 cri.go:89] found id: "53af0c63bb22ef1e818eef33ce3a5ef086d1134861658789776a3bd1bb2b8718"
	I1007 12:42:52.105233 1605045 cri.go:89] found id: "1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:42:52.105236 1605045 cri.go:89] found id: "087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:42:52.105240 1605045 cri.go:89] found id: "1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:42:52.105243 1605045 cri.go:89] found id: "56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:42:52.105246 1605045 cri.go:89] found id: "1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:42:52.105249 1605045 cri.go:89] found id: ""
	I1007 12:42:52.105302 1605045 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1007 12:42:52.118327 1605045 cri.go:116] JSON = null
	W1007 12:42:52.118396 1605045 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
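crictl reports eight kube-system containers above, but `runc --root /run/containerd/runc/k8s.io list -f json` returns JSON null, i.e. zero runc states, so the unpause step is skipped with a warning. A sketch reproducing that cross-check (same commands as in the log; the comparison logic is illustrative):

// Note: sketch of the cross-check above: crictl's view of kube-system
// containers versus runc's state list; a mismatch (8 vs 0 here) means
// nothing is actually paused, so unpause is skipped.
package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    if err != nil {
        panic(err)
    }
    ids := strings.Fields(string(psOut))

    listOut, err := exec.Command("sudo", "runc",
        "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    if err != nil {
        panic(err)
    }
    var states []struct {
        ID string `json:"id"`
    }
    _ = json.Unmarshal(listOut, &states) // "null" unmarshals to an empty slice

    fmt.Printf("crictl: %d containers, runc: %d states\n", len(ids), len(states))
}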
	I1007 12:42:52.118479 1605045 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:42:52.127823 1605045 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:42:52.127887 1605045 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:42:52.127946 1605045 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:42:52.136862 1605045 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:42:52.137526 1605045 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-130031" does not appear in /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:42:52.137840 1605045 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-1394934/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-130031" cluster setting kubeconfig missing "old-k8s-version-130031" context setting]
	I1007 12:42:52.138297 1605045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/kubeconfig: {Name:mkef6c987beefaa5e568c1a78e7d094f26b41d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:52.139768 1605045 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:42:52.149160 1605045 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1007 12:42:52.149191 1605045 kubeadm.go:597] duration metric: took 21.291505ms to restartPrimaryControlPlane
	I1007 12:42:52.149201 1605045 kubeadm.go:394] duration metric: took 83.729398ms to StartCluster
	I1007 12:42:52.149217 1605045 settings.go:142] acquiring lock: {Name:mk92e55c8b3391b1d94595f100e47ff9f6bf1d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:52.149280 1605045 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:42:52.150168 1605045 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/kubeconfig: {Name:mkef6c987beefaa5e568c1a78e7d094f26b41d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:42:52.150367 1605045 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 12:42:52.150637 1605045 config.go:182] Loaded profile config "old-k8s-version-130031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 12:42:52.150678 1605045 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:42:52.150789 1605045 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-130031"
	I1007 12:42:52.150808 1605045 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-130031"
	W1007 12:42:52.150819 1605045 addons.go:243] addon storage-provisioner should already be in state true
	I1007 12:42:52.150814 1605045 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-130031"
	I1007 12:42:52.150891 1605045 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-130031"
	I1007 12:42:52.150841 1605045 host.go:66] Checking if "old-k8s-version-130031" exists ...
	I1007 12:42:52.151283 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:52.151480 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:52.150846 1605045 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-130031"
	I1007 12:42:52.151938 1605045 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-130031"
	W1007 12:42:52.151949 1605045 addons.go:243] addon metrics-server should already be in state true
	I1007 12:42:52.151976 1605045 host.go:66] Checking if "old-k8s-version-130031" exists ...
	I1007 12:42:52.152380 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:52.150851 1605045 addons.go:69] Setting dashboard=true in profile "old-k8s-version-130031"
	I1007 12:42:52.155839 1605045 addons.go:234] Setting addon dashboard=true in "old-k8s-version-130031"
	W1007 12:42:52.155852 1605045 addons.go:243] addon dashboard should already be in state true
	I1007 12:42:52.155887 1605045 host.go:66] Checking if "old-k8s-version-130031" exists ...
	I1007 12:42:52.156414 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
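Each addon goroutine first reads the container state with `docker container inspect --format={{.State.Status}}`. The same check through the Docker Go SDK would look roughly like this (a sketch; assumes github.com/docker/docker/client is available):

// Note: sketch of the state check run repeatedly above, using the Docker
// Go SDK instead of the docker CLI.
package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    if err != nil {
        panic(err)
    }
    defer cli.Close()
    // Equivalent of: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
    info, err := cli.ContainerInspect(context.Background(), "old-k8s-version-130031")
    if err != nil {
        panic(err)
    }
    fmt.Println(info.State.Status) // e.g. "running"
}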
	I1007 12:42:52.156884 1605045 out.go:177] * Verifying Kubernetes components...
	I1007 12:42:52.167834 1605045 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:42:52.204233 1605045 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 12:42:52.204363 1605045 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:42:52.207047 1605045 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:42:52.207066 1605045 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:42:52.207123 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:52.207430 1605045 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-130031"
	W1007 12:42:52.207444 1605045 addons.go:243] addon default-storageclass should already be in state true
	I1007 12:42:52.207469 1605045 host.go:66] Checking if "old-k8s-version-130031" exists ...
	I1007 12:42:52.207915 1605045 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:42:52.207927 1605045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:42:52.207970 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:52.208430 1605045 cli_runner.go:164] Run: docker container inspect old-k8s-version-130031 --format={{.State.Status}}
	I1007 12:42:52.221005 1605045 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1007 12:42:52.223788 1605045 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1007 12:42:52.226468 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1007 12:42:52.226494 1605045 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1007 12:42:52.226563 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:52.259663 1605045 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:42:52.259689 1605045 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:42:52.259759 1605045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-130031
	I1007 12:42:52.264925 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:52.300947 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:52.312984 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:52.339756 1605045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38186 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/old-k8s-version-130031/id_rsa Username:docker}
	I1007 12:42:52.353448 1605045 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:42:52.389245 1605045 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-130031" to be "Ready" ...
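node_ready waits up to 6m0s by polling the node object and treating transient failures (such as the connection-refused errors further down, while the apiserver is still coming up) as "not yet" rather than fatal. A client-go sketch of that kind of poll (not minikube's implementation; names and intervals are illustrative):

// Note: a client-go sketch (illustrative intervals) of polling a node's
// Ready condition while tolerating transient API errors.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-130031", metav1.GetOptions{})
        if err != nil {
            return false, nil // transient (e.g. connection refused): keep polling
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    })
    fmt.Println("ready:", err == nil)
}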
	I1007 12:42:52.415929 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:42:52.458970 1605045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:42:52.459038 1605045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 12:42:52.491224 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1007 12:42:52.491292 1605045 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1007 12:42:52.500042 1605045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:42:52.500111 1605045 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:42:52.526093 1605045 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:42:52.526178 1605045 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:42:52.529924 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:42:52.534480 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1007 12:42:52.534552 1605045 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1007 12:42:52.558937 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:42:52.579031 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1007 12:42:52.579112 1605045 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1007 12:42:52.583242 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.583333 1605045 retry.go:31] will retry after 241.55294ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
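Each failed apply is retried after a short, randomized delay that grows across attempts (241.55294ms here, up to roughly 3.9s by 12:42:56) until the apiserver starts answering on 8443. A minimal sketch of such a retry helper (constants and the backoff shape are illustrative, not minikube's retry.go):

// Note: a minimal retry-with-growing-jittered-backoff sketch in the spirit
// of the retry.go lines in this log; constants are illustrative.
package main

import (
    "fmt"
    "math/rand"
    "time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = fn(); err == nil {
            return nil
        }
        // grow the delay each attempt and add jitter, as the varying
        // "will retry after ..." durations in the log suggest
        d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("will retry after %s: %v\n", d, err)
        time.Sleep(d)
    }
    return err
}

func main() {
    _ = retry(5, 200*time.Millisecond, func() error {
        return fmt.Errorf("connection to the server localhost:8443 was refused")
    })
}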
	I1007 12:42:52.645278 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1007 12:42:52.645357 1605045 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1007 12:42:52.679627 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1007 12:42:52.679710 1605045 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1007 12:42:52.700533 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:52.700633 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.700708 1605045 retry.go:31] will retry after 136.051444ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.700740 1605045 retry.go:31] will retry after 136.06365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.701005 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1007 12:42:52.701053 1605045 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1007 12:42:52.720770 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1007 12:42:52.720842 1605045 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1007 12:42:52.739464 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1007 12:42:52.739488 1605045 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1007 12:42:52.758579 1605045 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:42:52.758651 1605045 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1007 12:42:52.778079 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:42:52.825272 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:42:52.837659 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:42:52.837826 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:42:52.882956 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.882993 1605045 retry.go:31] will retry after 250.079193ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:52.973105 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:52.973142 1605045 retry.go:31] will retry after 207.366226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:53.010756 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:53.010791 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.010813 1605045 retry.go:31] will retry after 421.440513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.010812 1605045 retry.go:31] will retry after 310.116869ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.134049 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:42:53.181425 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 12:42:53.225237 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.225284 1605045 retry.go:31] will retry after 236.080528ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:53.274185 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.274224 1605045 retry.go:31] will retry after 510.509547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.321331 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 12:42:53.395211 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.395247 1605045 retry.go:31] will retry after 387.066246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.433415 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:42:53.461851 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 12:42:53.545198 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.545239 1605045 retry.go:31] will retry after 727.603289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:53.568550 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.568587 1605045 retry.go:31] will retry after 461.857192ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.782856 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:42:53.785238 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 12:42:53.871883 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.871934 1605045 retry.go:31] will retry after 616.553761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:53.900206 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:53.900246 1605045 retry.go:31] will retry after 501.353602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.031658 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 12:42:54.113639 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.113684 1605045 retry.go:31] will retry after 1.195827298s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.274036 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:42:54.354168 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.354203 1605045 retry.go:31] will retry after 575.588658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.390643 1605045 node_ready.go:53] error getting node "old-k8s-version-130031": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-130031": dial tcp 192.168.85.2:8443: connect: connection refused
	I1007 12:42:54.401939 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 12:42:54.473064 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.473147 1605045 retry.go:31] will retry after 963.072919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.489349 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 12:42:54.564564 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.564609 1605045 retry.go:31] will retry after 758.097733ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:54.929993 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:42:55.006793 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.006840 1605045 retry.go:31] will retry after 673.046607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.310246 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:42:55.323646 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 12:42:55.398243 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.398277 1605045 retry.go:31] will retry after 1.612587488s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:55.421372 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.421415 1605045 retry.go:31] will retry after 1.10843508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.436516 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 12:42:55.512371 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.512404 1605045 retry.go:31] will retry after 2.567546823s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.680766 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:42:55.757480 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:55.757513 1605045 retry.go:31] will retry after 2.002852124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:56.530040 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 12:42:56.625465 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:56.625495 1605045 retry.go:31] will retry after 3.891100884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:56.890408 1605045 node_ready.go:53] error getting node "old-k8s-version-130031": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-130031": dial tcp 192.168.85.2:8443: connect: connection refused
	I1007 12:42:57.011761 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 12:42:57.098425 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:57.098463 1605045 retry.go:31] will retry after 990.907863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:57.760558 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:42:57.840665 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:57.840696 1605045 retry.go:31] will retry after 2.237249445s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:58.080635 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:42:58.089951 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 12:42:58.212937 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:58.212974 1605045 retry.go:31] will retry after 3.381316308s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 12:42:58.241492 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:58.241526 1605045 retry.go:31] will retry after 3.614605632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:42:59.389750 1605045 node_ready.go:53] error getting node "old-k8s-version-130031": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-130031": dial tcp 192.168.85.2:8443: connect: connection refused
	I1007 12:43:00.078473 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 12:43:00.367878 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:43:00.367920 1605045 retry.go:31] will retry after 5.661104177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:43:00.516976 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 12:43:00.654168 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:43:00.654204 1605045 retry.go:31] will retry after 5.044831124s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 12:43:01.395978 1605045 node_ready.go:53] error getting node "old-k8s-version-130031": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-130031": dial tcp 192.168.85.2:8443: connect: connection refused
	I1007 12:43:01.595377 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:43:01.856321 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:43:05.700143 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:43:06.029762 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:43:12.243654 1605045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.64823095s)
	W1007 12:43:12.243710 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1007 12:43:12.243729 1605045 retry.go:31] will retry after 4.188356107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1007 12:43:12.390279 1605045 node_ready.go:53] error getting node "old-k8s-version-130031": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-130031": net/http: TLS handshake timeout
	I1007 12:43:12.395699 1605045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.539322345s)
	W1007 12:43:12.395746 1605045 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1007 12:43:12.395769 1605045 retry.go:31] will retry after 5.242449461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I1007 12:43:13.933444 1605045 node_ready.go:49] node "old-k8s-version-130031" has status "Ready":"True"
	I1007 12:43:13.933484 1605045 node_ready.go:38] duration metric: took 21.54414075s for node "old-k8s-version-130031" to be "Ready" ...
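The node went Ready about 21.5s after the restart. The node_ready check behind these lines just fetches the Node object and inspects its Ready condition, which is why the earlier "connection refused" and "TLS handshake timeout" errors were logged and retried rather than treated as fatal. A sketch of that check, assuming a k8s.io/client-go clientset (isNodeReady is a hypothetical name, not minikube's actual helper):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isNodeReady reports whether the node's Ready condition is True.
    // Errors such as "connection refused" are returned to the caller,
    // which keeps polling instead of giving up.
    func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
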
	I1007 12:43:13.933495 1605045 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1007 12:43:13.983173 1605045 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-466qx" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.016416 1605045 pod_ready.go:93] pod "coredns-74ff55c5b-466qx" in "kube-system" namespace has status "Ready":"True"
	I1007 12:43:14.016449 1605045 pod_ready.go:82] duration metric: took 33.233811ms for pod "coredns-74ff55c5b-466qx" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.016462 1605045 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.032906 1605045 pod_ready.go:93] pod "etcd-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"True"
	I1007 12:43:14.032943 1605045 pod_ready.go:82] duration metric: took 16.344908ms for pod "etcd-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.032959 1605045 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.098364 1605045 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"True"
	I1007 12:43:14.098391 1605045 pod_ready.go:82] duration metric: took 65.423622ms for pod "kube-apiserver-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.098404 1605045 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.166825 1605045 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"True"
	I1007 12:43:14.166908 1605045 pod_ready.go:82] duration metric: took 68.495632ms for pod "kube-controller-manager-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.166940 1605045 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zkws6" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.185510 1605045 pod_ready.go:93] pod "kube-proxy-zkws6" in "kube-system" namespace has status "Ready":"True"
	I1007 12:43:14.185601 1605045 pod_ready.go:82] duration metric: took 18.639562ms for pod "kube-proxy-zkws6" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.185640 1605045 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:43:14.515649 1605045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.815466071s)
	I1007 12:43:14.515758 1605045 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.485959941s)
	I1007 12:43:14.515784 1605045 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-130031"
	I1007 12:43:16.193779 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:16.432213 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:43:17.639241 1605045 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:43:18.202874 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:18.313110 1605045 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-130031 addons enable metrics-server
	
	I1007 12:43:18.314504 1605045 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I1007 12:43:18.315998 1605045 addons.go:510] duration metric: took 26.165311339s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
	I1007 12:43:20.691667 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:22.692301 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:24.692439 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:26.694342 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:28.696486 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:31.193245 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:33.694494 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:36.193560 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:38.694060 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:41.193309 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:43.193418 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:45.692074 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:47.696891 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:50.193757 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:52.692775 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:55.192375 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:43:57.692613 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:00.260854 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:02.691960 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:04.692386 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:07.191827 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:09.192434 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:11.693506 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:14.192315 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:16.692393 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:19.192107 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:21.192536 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:23.192605 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:25.693113 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:28.191919 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:30.201687 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:32.691674 1605045 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:33.691679 1605045 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:33.691705 1605045 pod_ready.go:82] duration metric: took 1m19.506025404s for pod "kube-scheduler-old-k8s-version-130031" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:33.691717 1605045 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:35.697303 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:37.700013 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:40.199665 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:42.697609 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:44.699302 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:47.203169 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:49.701442 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:52.199748 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:54.701076 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:57.198747 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:59.199738 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:01.702620 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:04.202074 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:06.697774 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:09.197294 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:11.198135 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:13.699095 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:16.198320 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:18.199073 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:20.199653 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:22.698908 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:24.764533 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:27.204475 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:29.697950 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:31.698174 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:34.198344 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:36.199034 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:38.698518 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:41.197739 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:43.198274 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:45.200214 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:47.698465 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:50.209612 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:52.698032 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:54.698529 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:57.199042 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:59.199947 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:01.699172 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:04.198094 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:06.200977 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:08.698731 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:11.197742 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:13.697536 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:15.698350 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:17.698902 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:20.198118 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:22.198991 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:24.698954 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:27.197821 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:29.197963 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:31.698121 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:33.698840 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:35.699055 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:38.198461 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:40.201085 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:42.697962 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:44.699060 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:47.198069 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:49.697694 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:51.698530 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:53.712643 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:56.198411 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:58.698498 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:00.698549 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:02.698653 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:05.199908 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:07.699357 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:10.198724 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:12.199314 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:14.698621 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:16.698797 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:19.197468 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:21.197672 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:23.198193 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:25.779910 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:28.199890 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:30.698423 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:32.699068 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:35.197849 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:37.198061 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:39.198266 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:41.698409 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:43.698526 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:46.197607 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:48.698495 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:50.698560 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:52.698657 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:55.198421 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:57.198532 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:59.198676 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:01.699140 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:04.198451 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:06.698886 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:09.198184 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:11.698682 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:13.699153 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:16.245742 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:18.699289 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:21.198024 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:23.198132 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:25.701146 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:28.198056 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:30.198477 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:32.198858 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:33.698488 1605045 pod_ready.go:82] duration metric: took 4m0.006757182s for pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace to be "Ready" ...
	E1007 12:48:33.698517 1605045 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 12:48:33.698528 1605045 pod_ready.go:39] duration metric: took 5m19.765021471s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
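metrics-server is the one pod that never reports Ready: as the kubelet entries gathered below show, its image is pinned to fake.domain/registry.k8s.io/echoserver:1.4, which never resolves, so the pod keeps cycling through ErrImagePull and ImagePullBackOff until the wait expires after 4m0s with "context deadline exceeded". A sketch of the wait loop's shape, again assuming a client-go clientset (waitPodReady is a hypothetical name):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod until its Ready condition is True or
    // the context deadline passes, surfacing ctx.Err() as the
    // "context deadline exceeded" seen above.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        for {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second):
            }
        }
    }
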
	I1007 12:48:33.698541 1605045 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:48:33.698570 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:48:33.698639 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:48:33.745225 1605045 cri.go:89] found id: "c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:33.745249 1605045 cri.go:89] found id: "087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:33.745255 1605045 cri.go:89] found id: ""
	I1007 12:48:33.745262 1605045 logs.go:282] 2 containers: [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38]
	I1007 12:48:33.745321 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.748942 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.752463 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 12:48:33.752545 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:48:33.797525 1605045 cri.go:89] found id: "1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:33.797603 1605045 cri.go:89] found id: "1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:33.797624 1605045 cri.go:89] found id: ""
	I1007 12:48:33.797633 1605045 logs.go:282] 2 containers: [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b]
	I1007 12:48:33.797702 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.801579 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.805144 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 12:48:33.805217 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:48:33.842578 1605045 cri.go:89] found id: "4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:33.842647 1605045 cri.go:89] found id: "8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:33.842667 1605045 cri.go:89] found id: ""
	I1007 12:48:33.842687 1605045 logs.go:282] 2 containers: [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a]
	I1007 12:48:33.842825 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.846543 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.850168 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:48:33.850239 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:48:33.889762 1605045 cri.go:89] found id: "8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:33.889784 1605045 cri.go:89] found id: "56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:33.889789 1605045 cri.go:89] found id: ""
	I1007 12:48:33.889796 1605045 logs.go:282] 2 containers: [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990]
	I1007 12:48:33.889854 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.893670 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.897461 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:48:33.897532 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:48:33.938164 1605045 cri.go:89] found id: "e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:33.938245 1605045 cri.go:89] found id: "1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:33.938258 1605045 cri.go:89] found id: ""
	I1007 12:48:33.938266 1605045 logs.go:282] 2 containers: [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da]
	I1007 12:48:33.938330 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.942472 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.946703 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:48:33.946798 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:48:33.990267 1605045 cri.go:89] found id: "07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:33.990292 1605045 cri.go:89] found id: "1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:33.990300 1605045 cri.go:89] found id: ""
	I1007 12:48:33.990308 1605045 logs.go:282] 2 containers: [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca]
	I1007 12:48:33.990370 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.994131 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.997636 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 12:48:33.997710 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:48:34.039420 1605045 cri.go:89] found id: "8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:34.039443 1605045 cri.go:89] found id: "9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:34.039448 1605045 cri.go:89] found id: ""
	I1007 12:48:34.039455 1605045 logs.go:282] 2 containers: [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1]
	I1007 12:48:34.039583 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.043365 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.047381 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:48:34.047483 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:48:34.093112 1605045 cri.go:89] found id: "a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:34.093187 1605045 cri.go:89] found id: "4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:34.093208 1605045 cri.go:89] found id: ""
	I1007 12:48:34.093236 1605045 logs.go:282] 2 containers: [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468]
	I1007 12:48:34.093313 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.099043 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.103144 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 12:48:34.103227 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 12:48:34.144123 1605045 cri.go:89] found id: "ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:34.144147 1605045 cri.go:89] found id: ""
	I1007 12:48:34.144154 1605045 logs.go:282] 1 containers: [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0]
	I1007 12:48:34.144211 1605045 ssh_runner.go:195] Run: which crictl
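With the waits exhausted, the harness starts collecting diagnostics. Each "listing CRI containers" step above shells out to crictl; "ps -a --quiet --name=<component>" prints one container ID per line for both running and exited containers, which is why most components show two IDs here (the current container plus the exited one from before the restart). A sketch of that call (listContainerIDs is a hypothetical wrapper):

    package main

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the shell-out above: "crictl ps -a --quiet
    // --name=<component>" prints one container ID per line, covering both
    // running and exited containers.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }
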
	I1007 12:48:34.148181 1605045 logs.go:123] Gathering logs for kubelet ...
	I1007 12:48:34.148221 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:48:34.200768 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.872520     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ktk5h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ktk5h" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201084 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891855     658 reflector.go:138] object-"kube-system"/"kindnet-token-srnxf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srnxf" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201296 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891938     658 reflector.go:138] object-"kube-system"/"coredns-token-627jj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-627jj" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201511 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893244     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-khb44": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-khb44" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201722 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893311     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201944 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893344     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.202154 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952228     658 reflector.go:138] object-"default"/"default-token-nq6kr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-nq6kr" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.202374 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952337     658 reflector.go:138] object-"kube-system"/"metrics-server-token-t2mrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t2mrc" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.212449 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:15 old-k8s-version-130031 kubelet[658]: E1007 12:43:15.871648     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.212932 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:16 old-k8s-version-130031 kubelet[658]: E1007 12:43:16.563851     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.216459 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:30 old-k8s-version-130031 kubelet[658]: E1007 12:43:30.401060     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.218593 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.377295     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.219048 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.733029     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.219377 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:45 old-k8s-version-130031 kubelet[658]: E1007 12:43:45.736335     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.219828 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:47 old-k8s-version-130031 kubelet[658]: E1007 12:43:47.745194     658 pod_workers.go:191] Error syncing pod 2562f693-2c1c-4966-9978-9712666b4812 ("storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"
	W1007 12:48:34.220493 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:53 old-k8s-version-130031 kubelet[658]: E1007 12:43:53.692850     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.223067 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:59 old-k8s-version-130031 kubelet[658]: E1007 12:43:59.383291     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.223690 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:06 old-k8s-version-130031 kubelet[658]: E1007 12:44:06.798826     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.223879 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:10 old-k8s-version-130031 kubelet[658]: E1007 12:44:10.374328     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.224209 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:13 old-k8s-version-130031 kubelet[658]: E1007 12:44:13.693437     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.224395 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:21 old-k8s-version-130031 kubelet[658]: E1007 12:44:21.373629     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.224727 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:26 old-k8s-version-130031 kubelet[658]: E1007 12:44:26.373370     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.224912 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:35 old-k8s-version-130031 kubelet[658]: E1007 12:44:35.373708     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.225493 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:40 old-k8s-version-130031 kubelet[658]: E1007 12:44:40.884522     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.225817 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:43 old-k8s-version-130031 kubelet[658]: E1007 12:44:43.693924     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.228241 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:46 old-k8s-version-130031 kubelet[658]: E1007 12:44:46.384700     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.228566 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:57 old-k8s-version-130031 kubelet[658]: E1007 12:44:57.373721     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.228750 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:59 old-k8s-version-130031 kubelet[658]: E1007 12:44:59.375281     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.229079 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:09 old-k8s-version-130031 kubelet[658]: E1007 12:45:09.373255     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.229265 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:11 old-k8s-version-130031 kubelet[658]: E1007 12:45:11.373876     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.229846 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:22 old-k8s-version-130031 kubelet[658]: E1007 12:45:21.996110     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230218 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:23 old-k8s-version-130031 kubelet[658]: E1007 12:45:23.693664     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230407 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:26 old-k8s-version-130031 kubelet[658]: E1007 12:45:26.374341     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.230735 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.374842     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230922 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.377418     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231245 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.375115     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.231431 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.378298     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231626 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:01 old-k8s-version-130031 kubelet[658]: E1007 12:46:01.373996     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231965 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:03 old-k8s-version-130031 kubelet[658]: E1007 12:46:03.373238     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.234414 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:14 old-k8s-version-130031 kubelet[658]: E1007 12:46:14.383215     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.234741 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:16 old-k8s-version-130031 kubelet[658]: E1007 12:46:16.373283     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.234926 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:25 old-k8s-version-130031 kubelet[658]: E1007 12:46:25.373793     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.235250 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:28 old-k8s-version-130031 kubelet[658]: E1007 12:46:28.373827     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.235434 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:39 old-k8s-version-130031 kubelet[658]: E1007 12:46:39.373930     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.236040 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:44 old-k8s-version-130031 kubelet[658]: E1007 12:46:44.239654     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.236227 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:50 old-k8s-version-130031 kubelet[658]: E1007 12:46:50.374092     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.236557 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:53 old-k8s-version-130031 kubelet[658]: E1007 12:46:53.693317     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.236742 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:01 old-k8s-version-130031 kubelet[658]: E1007 12:47:01.373998     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.237067 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:04 old-k8s-version-130031 kubelet[658]: E1007 12:47:04.374129     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.237252 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:14 old-k8s-version-130031 kubelet[658]: E1007 12:47:14.373917     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.237578 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: E1007 12:47:17.373207     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.237763 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:28 old-k8s-version-130031 kubelet[658]: E1007 12:47:28.373860     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.238090 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: E1007 12:47:29.373269     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.238417 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373813     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.238602 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373880     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.238927 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: E1007 12:47:54.374386     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.239112 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:56 old-k8s-version-130031 kubelet[658]: E1007 12:47:56.373681     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.239436 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.239643 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.239974 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.240160 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.240488 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	I1007 12:48:34.240499 1605045 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:48:34.240513 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:48:34.445314 1605045 logs.go:123] Gathering logs for kube-scheduler [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a] ...
	I1007 12:48:34.445346 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:34.494776 1605045 logs.go:123] Gathering logs for kube-scheduler [56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990] ...
	I1007 12:48:34.494805 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:34.547430 1605045 logs.go:123] Gathering logs for kindnet [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4] ...
	I1007 12:48:34.547462 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:34.604841 1605045 logs.go:123] Gathering logs for containerd ...
	I1007 12:48:34.604873 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 12:48:34.668346 1605045 logs.go:123] Gathering logs for container status ...
	I1007 12:48:34.668385 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:48:34.719137 1605045 logs.go:123] Gathering logs for dmesg ...
	I1007 12:48:34.719168 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:48:34.737624 1605045 logs.go:123] Gathering logs for etcd [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc] ...
	I1007 12:48:34.737656 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:34.794782 1605045 logs.go:123] Gathering logs for kube-proxy [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a] ...
	I1007 12:48:34.794815 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:34.834055 1605045 logs.go:123] Gathering logs for kube-proxy [1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da] ...
	I1007 12:48:34.834081 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:34.876737 1605045 logs.go:123] Gathering logs for kube-controller-manager [1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca] ...
	I1007 12:48:34.876765 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:34.934494 1605045 logs.go:123] Gathering logs for coredns [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55] ...
	I1007 12:48:34.934534 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:34.983022 1605045 logs.go:123] Gathering logs for coredns [8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a] ...
	I1007 12:48:34.983052 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:35.030424 1605045 logs.go:123] Gathering logs for kube-controller-manager [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df] ...
	I1007 12:48:35.030499 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:35.101750 1605045 logs.go:123] Gathering logs for kindnet [9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1] ...
	I1007 12:48:35.101794 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:35.163310 1605045 logs.go:123] Gathering logs for storage-provisioner [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444] ...
	I1007 12:48:35.163344 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:35.233162 1605045 logs.go:123] Gathering logs for storage-provisioner [4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468] ...
	I1007 12:48:35.233194 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:35.301675 1605045 logs.go:123] Gathering logs for kubernetes-dashboard [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0] ...
	I1007 12:48:35.301700 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:35.342492 1605045 logs.go:123] Gathering logs for kube-apiserver [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a] ...
	I1007 12:48:35.342563 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:35.404529 1605045 logs.go:123] Gathering logs for kube-apiserver [087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38] ...
	I1007 12:48:35.404565 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:35.470385 1605045 logs.go:123] Gathering logs for etcd [1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b] ...
	I1007 12:48:35.470418 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:35.513496 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:35.513522 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:48:35.513597 1605045 out.go:270] X Problems detected in kubelet:
	W1007 12:48:35.513608 1605045 out.go:270]   Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:35.513617 1605045 out.go:270]   Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:35.513670 1605045 out.go:270]   Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:35.513684 1605045 out.go:270]   Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:35.513690 1605045 out.go:270]   Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	I1007 12:48:35.513702 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:35.513710 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:48:45.514839 1605045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:48:45.529032 1605045 api_server.go:72] duration metric: took 5m53.378623593s to wait for apiserver process to appear ...
	I1007 12:48:45.529060 1605045 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:48:45.529095 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:48:45.529154 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:48:45.575104 1605045 cri.go:89] found id: "c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:45.575123 1605045 cri.go:89] found id: "087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:45.575129 1605045 cri.go:89] found id: ""
	I1007 12:48:45.575135 1605045 logs.go:282] 2 containers: [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38]
	I1007 12:48:45.575192 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.578978 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.582407 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 12:48:45.582478 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:48:45.629320 1605045 cri.go:89] found id: "1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:45.629342 1605045 cri.go:89] found id: "1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:45.629347 1605045 cri.go:89] found id: ""
	I1007 12:48:45.629353 1605045 logs.go:282] 2 containers: [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b]
	I1007 12:48:45.629409 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.633005 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.636292 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 12:48:45.636360 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:48:45.672536 1605045 cri.go:89] found id: "4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:45.672558 1605045 cri.go:89] found id: "8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:45.672563 1605045 cri.go:89] found id: ""
	I1007 12:48:45.672570 1605045 logs.go:282] 2 containers: [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a]
	I1007 12:48:45.672627 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.676578 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.679886 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:48:45.679950 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:48:45.726013 1605045 cri.go:89] found id: "8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:45.726039 1605045 cri.go:89] found id: "56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:45.726044 1605045 cri.go:89] found id: ""
	I1007 12:48:45.726053 1605045 logs.go:282] 2 containers: [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990]
	I1007 12:48:45.726108 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.729958 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.733303 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:48:45.733377 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:48:45.772251 1605045 cri.go:89] found id: "e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:45.772273 1605045 cri.go:89] found id: "1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:45.772278 1605045 cri.go:89] found id: ""
	I1007 12:48:45.772286 1605045 logs.go:282] 2 containers: [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da]
	I1007 12:48:45.772341 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.777423 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.781563 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:48:45.781630 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:48:45.817672 1605045 cri.go:89] found id: "07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:45.817702 1605045 cri.go:89] found id: "1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:45.817706 1605045 cri.go:89] found id: ""
	I1007 12:48:45.817714 1605045 logs.go:282] 2 containers: [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca]
	I1007 12:48:45.817770 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.821567 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.824945 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 12:48:45.825011 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:48:45.864398 1605045 cri.go:89] found id: "8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:45.864418 1605045 cri.go:89] found id: "9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:45.864422 1605045 cri.go:89] found id: ""
	I1007 12:48:45.864435 1605045 logs.go:282] 2 containers: [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1]
	I1007 12:48:45.864491 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.868357 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.871596 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 12:48:45.871671 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 12:48:45.911632 1605045 cri.go:89] found id: "ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:45.911653 1605045 cri.go:89] found id: ""
	I1007 12:48:45.911662 1605045 logs.go:282] 1 containers: [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0]
	I1007 12:48:45.911716 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.915510 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:48:45.915660 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:48:45.953579 1605045 cri.go:89] found id: "a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:45.953604 1605045 cri.go:89] found id: "4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:45.953609 1605045 cri.go:89] found id: ""
	I1007 12:48:45.953616 1605045 logs.go:282] 2 containers: [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468]
	I1007 12:48:45.953678 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.957566 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.961195 1605045 logs.go:123] Gathering logs for kube-scheduler [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a] ...
	I1007 12:48:45.961225 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:46.020263 1605045 logs.go:123] Gathering logs for kube-proxy [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a] ...
	I1007 12:48:46.020304 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:46.069672 1605045 logs.go:123] Gathering logs for kube-proxy [1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da] ...
	I1007 12:48:46.069706 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:46.114842 1605045 logs.go:123] Gathering logs for kube-controller-manager [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df] ...
	I1007 12:48:46.114882 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:46.195968 1605045 logs.go:123] Gathering logs for kubelet ...
	I1007 12:48:46.196002 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:48:46.266565 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.872520     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ktk5h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ktk5h" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.266905 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891855     658 reflector.go:138] object-"kube-system"/"kindnet-token-srnxf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srnxf" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267119 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891938     658 reflector.go:138] object-"kube-system"/"coredns-token-627jj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-627jj" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267333 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893244     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-khb44": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-khb44" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267575 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893311     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267776 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893344     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267983 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952228     658 reflector.go:138] object-"default"/"default-token-nq6kr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-nq6kr" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.268201 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952337     658 reflector.go:138] object-"kube-system"/"metrics-server-token-t2mrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t2mrc" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.282208 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:15 old-k8s-version-130031 kubelet[658]: E1007 12:43:15.871648     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.282707 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:16 old-k8s-version-130031 kubelet[658]: E1007 12:43:16.563851     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.286279 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:30 old-k8s-version-130031 kubelet[658]: E1007 12:43:30.401060     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.288534 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.377295     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.288993 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.733029     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.289330 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:45 old-k8s-version-130031 kubelet[658]: E1007 12:43:45.736335     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.289765 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:47 old-k8s-version-130031 kubelet[658]: E1007 12:43:47.745194     658 pod_workers.go:191] Error syncing pod 2562f693-2c1c-4966-9978-9712666b4812 ("storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"
	W1007 12:48:46.290420 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:53 old-k8s-version-130031 kubelet[658]: E1007 12:43:53.692850     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.293037 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:59 old-k8s-version-130031 kubelet[658]: E1007 12:43:59.383291     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.293627 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:06 old-k8s-version-130031 kubelet[658]: E1007 12:44:06.798826     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.293813 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:10 old-k8s-version-130031 kubelet[658]: E1007 12:44:10.374328     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.294135 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:13 old-k8s-version-130031 kubelet[658]: E1007 12:44:13.693437     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.294318 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:21 old-k8s-version-130031 kubelet[658]: E1007 12:44:21.373629     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.294642 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:26 old-k8s-version-130031 kubelet[658]: E1007 12:44:26.373370     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.294826 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:35 old-k8s-version-130031 kubelet[658]: E1007 12:44:35.373708     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.295405 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:40 old-k8s-version-130031 kubelet[658]: E1007 12:44:40.884522     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.295738 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:43 old-k8s-version-130031 kubelet[658]: E1007 12:44:43.693924     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.298191 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:46 old-k8s-version-130031 kubelet[658]: E1007 12:44:46.384700     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.298518 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:57 old-k8s-version-130031 kubelet[658]: E1007 12:44:57.373721     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.298703 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:59 old-k8s-version-130031 kubelet[658]: E1007 12:44:59.375281     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.299029 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:09 old-k8s-version-130031 kubelet[658]: E1007 12:45:09.373255     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.299214 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:11 old-k8s-version-130031 kubelet[658]: E1007 12:45:11.373876     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.299803 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:22 old-k8s-version-130031 kubelet[658]: E1007 12:45:21.996110     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300128 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:23 old-k8s-version-130031 kubelet[658]: E1007 12:45:23.693664     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300311 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:26 old-k8s-version-130031 kubelet[658]: E1007 12:45:26.374341     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.300635 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.374842     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300822 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.377418     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301152 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.375115     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.301335 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.378298     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301518 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:01 old-k8s-version-130031 kubelet[658]: E1007 12:46:01.373996     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301842 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:03 old-k8s-version-130031 kubelet[658]: E1007 12:46:03.373238     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.304301 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:14 old-k8s-version-130031 kubelet[658]: E1007 12:46:14.383215     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.304631 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:16 old-k8s-version-130031 kubelet[658]: E1007 12:46:16.373283     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.304815 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:25 old-k8s-version-130031 kubelet[658]: E1007 12:46:25.373793     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.305143 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:28 old-k8s-version-130031 kubelet[658]: E1007 12:46:28.373827     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.305336 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:39 old-k8s-version-130031 kubelet[658]: E1007 12:46:39.373930     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.305921 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:44 old-k8s-version-130031 kubelet[658]: E1007 12:46:44.239654     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.306105 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:50 old-k8s-version-130031 kubelet[658]: E1007 12:46:50.374092     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.306427 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:53 old-k8s-version-130031 kubelet[658]: E1007 12:46:53.693317     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.306611 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:01 old-k8s-version-130031 kubelet[658]: E1007 12:47:01.373998     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.306938 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:04 old-k8s-version-130031 kubelet[658]: E1007 12:47:04.374129     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.307131 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:14 old-k8s-version-130031 kubelet[658]: E1007 12:47:14.373917     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.307465 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: E1007 12:47:17.373207     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.307663 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:28 old-k8s-version-130031 kubelet[658]: E1007 12:47:28.373860     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.308013 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: E1007 12:47:29.373269     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.308341 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373813     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.308531 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373880     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.308859 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: E1007 12:47:54.374386     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.309045 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:56 old-k8s-version-130031 kubelet[658]: E1007 12:47:56.373681     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.309376 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.309561 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.309895 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.310080 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.310406 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.310601 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:36 old-k8s-version-130031 kubelet[658]: E1007 12:48:36.373890     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1007 12:48:46.310613 1605045 logs.go:123] Gathering logs for dmesg ...
	I1007 12:48:46.310628 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:48:46.330502 1605045 logs.go:123] Gathering logs for kube-apiserver [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a] ...
	I1007 12:48:46.330530 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:46.411082 1605045 logs.go:123] Gathering logs for etcd [1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b] ...
	I1007 12:48:46.411116 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:46.461183 1605045 logs.go:123] Gathering logs for kindnet [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4] ...
	I1007 12:48:46.461211 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:46.529582 1605045 logs.go:123] Gathering logs for kube-apiserver [087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38] ...
	I1007 12:48:46.529616 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:46.591839 1605045 logs.go:123] Gathering logs for coredns [8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a] ...
	I1007 12:48:46.591876 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:46.638143 1605045 logs.go:123] Gathering logs for kubernetes-dashboard [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0] ...
	I1007 12:48:46.638171 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:46.692586 1605045 logs.go:123] Gathering logs for storage-provisioner [4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468] ...
	I1007 12:48:46.692624 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:46.731099 1605045 logs.go:123] Gathering logs for containerd ...
	I1007 12:48:46.731168 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 12:48:46.790385 1605045 logs.go:123] Gathering logs for container status ...
	I1007 12:48:46.790420 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:48:46.848918 1605045 logs.go:123] Gathering logs for kube-controller-manager [1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca] ...
	I1007 12:48:46.848948 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:46.915621 1605045 logs.go:123] Gathering logs for kindnet [9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1] ...
	I1007 12:48:46.915656 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:46.961161 1605045 logs.go:123] Gathering logs for storage-provisioner [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444] ...
	I1007 12:48:46.961190 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:46.998949 1605045 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:48:46.999040 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:48:47.158864 1605045 logs.go:123] Gathering logs for etcd [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc] ...
	I1007 12:48:47.158897 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:47.207012 1605045 logs.go:123] Gathering logs for coredns [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55] ...
	I1007 12:48:47.207048 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:47.256247 1605045 logs.go:123] Gathering logs for kube-scheduler [56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990] ...
	I1007 12:48:47.256273 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:47.299388 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:47.299414 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:48:47.299539 1605045 out.go:270] X Problems detected in kubelet:
	W1007 12:48:47.299555 1605045 out.go:270]   Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:47.299575 1605045 out.go:270]   Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:47.299582 1605045 out.go:270]   Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:47.299593 1605045 out.go:270]   Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:47.299601 1605045 out.go:270]   Oct 07 12:48:36 old-k8s-version-130031 kubelet[658]: E1007 12:48:36.373890     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1007 12:48:47.299607 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:47.299620 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:48:57.301471 1605045 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1007 12:48:57.310517 1605045 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1007 12:48:57.313478 1605045 out.go:201] 
	W1007 12:48:57.316110 1605045 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1007 12:48:57.316147 1605045 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1007 12:48:57.316167 1605045 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1007 12:48:57.316173 1605045 out.go:270] * 
	W1007 12:48:57.317058 1605045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:48:57.320594 1605045 out.go:201] 

                                                
                                                
** /stderr **
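The run exits with K8S_UNHEALTHY_CONTROL_PLANE even though the final healthz probe returns 200: the control plane never reported v1.20.0 within the 6m0s wait. A minimal recovery sketch based on the suggestions printed in the log above (the binary path and profile name come from this report; passing -p to `logs` is an assumption to scope it to the failing profile):

	# Tear down all profiles and cached state, as the failure message suggests:
	out/minikube-linux-arm64 delete --all --purge
	# Capture full logs for a GitHub issue, as the boxed hint suggests:
	out/minikube-linux-arm64 logs --file=logs.txt -p old-k8s-version-130031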
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
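For local reproduction, the failing invocation can be trimmed to the flags that matter under the docker driver (a sketch assuming an arm64 host with Docker; the --kvm-* and --disable-driver-mounts flags in the original args have no effect with --driver=docker):

	out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker \
	  --container-runtime=containerd --kubernetes-version=v1.20.0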
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-130031
helpers_test.go:235: (dbg) docker inspect old-k8s-version-130031:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f",
	        "Created": "2024-10-07T12:40:28.46131901Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1605240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T12:42:45.340486207Z",
	            "FinishedAt": "2024-10-07T12:42:44.334325975Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f/hosts",
	        "LogPath": "/var/lib/docker/containers/d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f/d3a8db03c85c69801314f859386fae8475834e135b8fe3d1c777298bca21af8f-json.log",
	        "Name": "/old-k8s-version-130031",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-130031:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-130031",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ebbd7eb0c68f9426ce8c7ed635b29181f8a8daef19399c2375726a091d1236-init/diff:/var/lib/docker/overlay2/056f79e8a8729c0886964eb01f46792a83efc9c9ba3dec7e1dde1dce89315afa/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ebbd7eb0c68f9426ce8c7ed635b29181f8a8daef19399c2375726a091d1236/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ebbd7eb0c68f9426ce8c7ed635b29181f8a8daef19399c2375726a091d1236/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ebbd7eb0c68f9426ce8c7ed635b29181f8a8daef19399c2375726a091d1236/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-130031",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-130031/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-130031",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-130031",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-130031",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1320f87a33d647ae532069bac174ea4a51343c7375d288550a7c63c7e86a1ab7",
	            "SandboxKey": "/var/run/docker/netns/1320f87a33d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38189"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-130031": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "041a6ff28b3213ed5b40a4aa9ea9bdff1098808b3c1382e674b251aa0c084106",
	                    "EndpointID": "161f573d2c6695e295e31f78bd7884d27943afdf7c025f0b9d060ccda0ed79ba",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-130031",
	                        "d3a8db03c85c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
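Individual fields from the inspect output above can be pulled directly with docker's Go-template formatting rather than reading the full JSON, e.g. the container state and the profile network's IP address (a sketch; both template paths match the JSON shown above):

	# Expected: "running", per .State.Status above
	docker inspect -f '{{.State.Status}}' old-k8s-version-130031
	# Expected: 192.168.85.2, per .NetworkSettings.Networks above
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-130031").IPAddress}}' old-k8s-version-130031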
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130031 -n old-k8s-version-130031
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-130031 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-130031 logs -n 25: (2.005376175s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-798986 sudo find                             | cilium-798986             | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-798986 sudo crio                             | cilium-798986             | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-798986                                       | cilium-798986             | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	| start   | -p force-systemd-env-471819                            | force-systemd-env-471819  | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-448988                              | force-systemd-flag-448988 | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-448988                           | force-systemd-flag-448988 | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	| start   | -p cert-expiration-914735                              | cert-expiration-914735    | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-471819                               | force-systemd-env-471819  | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-471819                            | force-systemd-env-471819  | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:39 UTC |
	| start   | -p cert-options-034457                                 | cert-options-034457       | jenkins | v1.34.0 | 07 Oct 24 12:39 UTC | 07 Oct 24 12:40 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-034457 ssh                                | cert-options-034457       | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-034457 -- sudo                         | cert-options-034457       | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-034457                                 | cert-options-034457       | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	| start   | -p old-k8s-version-130031                              | old-k8s-version-130031    | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:42 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-130031        | old-k8s-version-130031    | jenkins | v1.34.0 | 07 Oct 24 12:42 UTC | 07 Oct 24 12:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-130031                              | old-k8s-version-130031    | jenkins | v1.34.0 | 07 Oct 24 12:42 UTC | 07 Oct 24 12:42 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-130031             | old-k8s-version-130031    | jenkins | v1.34.0 | 07 Oct 24 12:42 UTC | 07 Oct 24 12:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-130031                              | old-k8s-version-130031    | jenkins | v1.34.0 | 07 Oct 24 12:42 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-914735                              | cert-expiration-914735    | jenkins | v1.34.0 | 07 Oct 24 12:42 UTC | 07 Oct 24 12:43 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-914735                              | cert-expiration-914735    | jenkins | v1.34.0 | 07 Oct 24 12:43 UTC | 07 Oct 24 12:43 UTC |
	| start   | -p no-preload-842812                                   | no-preload-842812         | jenkins | v1.34.0 | 07 Oct 24 12:43 UTC | 07 Oct 24 12:44 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-842812             | no-preload-842812         | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC | 07 Oct 24 12:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-842812                                   | no-preload-842812         | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC | 07 Oct 24 12:44 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-842812                  | no-preload-842812         | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC | 07 Oct 24 12:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-842812                                   | no-preload-842812         | jenkins | v1.34.0 | 07 Oct 24 12:44 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:44:37
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:44:37.438929 1613577 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:44:37.439059 1613577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:44:37.439070 1613577 out.go:358] Setting ErrFile to fd 2...
	I1007 12:44:37.439075 1613577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:44:37.439316 1613577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:44:37.439747 1613577 out.go:352] Setting JSON to false
	I1007 12:44:37.440784 1613577 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":95229,"bootTime":1728209849,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 12:44:37.440863 1613577 start.go:139] virtualization:  
	I1007 12:44:37.445526 1613577 out.go:177] * [no-preload-842812] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:44:37.448236 1613577 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:44:37.448322 1613577 notify.go:220] Checking for updates...
	I1007 12:44:37.453780 1613577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:44:37.456283 1613577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:44:37.459012 1613577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 12:44:37.461733 1613577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:44:37.464295 1613577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:44:37.467401 1613577 config.go:182] Loaded profile config "no-preload-842812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:44:37.468003 1613577 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:44:37.499719 1613577 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:44:37.499871 1613577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:44:37.552806 1613577 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:44:37.542496071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:44:37.552914 1613577 docker.go:318] overlay module found
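	(The `docker system info --format "{{json .}}"` dumps above are how minikube snapshots host capabilities before reusing a profile. For a manual spot check, the same Go-template syntax can pull individual fields; the expected values in the comment below are read off the dump above, not newly measured.)
	  docker system info --format '{{.Driver}} {{.NCPU}} {{.MemTotal}} {{.CgroupDriver}}'
	  # here: overlay2 2 8214835200 cgroupfs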
	I1007 12:44:37.555616 1613577 out.go:177] * Using the docker driver based on existing profile
	I1007 12:44:37.558118 1613577 start.go:297] selected driver: docker
	I1007 12:44:37.558134 1613577 start.go:901] validating driver "docker" against &{Name:no-preload-842812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-842812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:44:37.558239 1613577 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:44:37.558943 1613577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:44:37.614823 1613577 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:44:37.605063448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:44:37.615242 1613577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:44:37.615261 1613577 cni.go:84] Creating CNI manager for ""
	I1007 12:44:37.615396 1613577 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:44:37.615467 1613577 start.go:340] cluster config:
	{Name:no-preload-842812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-842812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:44:37.620059 1613577 out.go:177] * Starting "no-preload-842812" primary control-plane node in "no-preload-842812" cluster
	I1007 12:44:37.622509 1613577 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 12:44:37.625148 1613577 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:44:37.627641 1613577 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:44:37.627737 1613577 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:44:37.627782 1613577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/config.json ...
	I1007 12:44:37.628078 1613577 cache.go:107] acquiring lock: {Name:mk8d87139f3575a4c6c4873528f48712e9fad321 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628165 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 12:44:37.628179 1613577 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.353µs
	I1007 12:44:37.628193 1613577 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 12:44:37.628208 1613577 cache.go:107] acquiring lock: {Name:mk560b2988e901d4931c1dc3f51096b21a79d8f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628245 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 12:44:37.628255 1613577 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 48.713µs
	I1007 12:44:37.628261 1613577 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 12:44:37.628271 1613577 cache.go:107] acquiring lock: {Name:mk7498ebac0aea844435b0260cd1d9f276bd96df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628302 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 12:44:37.628311 1613577 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 40.639µs
	I1007 12:44:37.628318 1613577 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 12:44:37.628327 1613577 cache.go:107] acquiring lock: {Name:mkbda6554c9ef51870c70c2b60d75d89d72b015d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628359 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 12:44:37.628368 1613577 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 42.534µs
	I1007 12:44:37.628375 1613577 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 12:44:37.628385 1613577 cache.go:107] acquiring lock: {Name:mk1e6c01e81a95b48e8099c6100b03cd1a93f9f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628416 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 12:44:37.628426 1613577 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 41.516µs
	I1007 12:44:37.628432 1613577 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 12:44:37.628444 1613577 cache.go:107] acquiring lock: {Name:mk0674a086de79b5bc51117b2ebb2bd0e1c1d1e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628480 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 12:44:37.628502 1613577 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 47.49µs
	I1007 12:44:37.628512 1613577 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 12:44:37.628522 1613577 cache.go:107] acquiring lock: {Name:mk92ba7dd7de0421f52bf8d3ba7c5cb5e1a29566 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628553 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 12:44:37.628561 1613577 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 40.68µs
	I1007 12:44:37.628567 1613577 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 12:44:37.628576 1613577 cache.go:107] acquiring lock: {Name:mk8c71fbaa37bacb56686f43dd8b93d643e75b1e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.628606 1613577 cache.go:115] /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 12:44:37.628615 1613577 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 39.852µs
	I1007 12:44:37.628622 1613577 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 12:44:37.628628 1613577 cache.go:87] Successfully saved all images to host disk.
	I1007 12:44:37.647486 1613577 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 12:44:37.647508 1613577 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 12:44:37.647569 1613577 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:44:37.647594 1613577 start.go:360] acquireMachinesLock for no-preload-842812: {Name:mk9eef8a09c33cdd8adeca5dee73e6d6fb9261a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:44:37.647660 1613577 start.go:364] duration metric: took 45.694µs to acquireMachinesLock for "no-preload-842812"
	I1007 12:44:37.647684 1613577 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:44:37.647691 1613577 fix.go:54] fixHost starting: 
	I1007 12:44:37.647936 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:37.664055 1613577 fix.go:112] recreateIfNeeded on no-preload-842812: state=Stopped err=<nil>
	W1007 12:44:37.664101 1613577 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:44:37.668961 1613577 out.go:177] * Restarting existing docker container for "no-preload-842812" ...
	I1007 12:44:35.697303 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:37.700013 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:37.671607 1613577 cli_runner.go:164] Run: docker start no-preload-842812
	I1007 12:44:38.035289 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:38.059839 1613577 kic.go:430] container "no-preload-842812" state is running.
	I1007 12:44:38.060316 1613577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842812
	I1007 12:44:38.084345 1613577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/config.json ...
	I1007 12:44:38.084585 1613577 machine.go:93] provisionDockerMachine start ...
	I1007 12:44:38.084656 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:38.106135 1613577 main.go:141] libmachine: Using SSH client type: native
	I1007 12:44:38.106391 1613577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38196 <nil> <nil>}
	I1007 12:44:38.106400 1613577 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:44:38.106984 1613577 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33910->127.0.0.1:38196: read: connection reset by peer
	I1007 12:44:41.247152 1613577 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-842812
	
	I1007 12:44:41.247222 1613577 ubuntu.go:169] provisioning hostname "no-preload-842812"
	I1007 12:44:41.247295 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:41.279575 1613577 main.go:141] libmachine: Using SSH client type: native
	I1007 12:44:41.279827 1613577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38196 <nil> <nil>}
	I1007 12:44:41.279840 1613577 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-842812 && echo "no-preload-842812" | sudo tee /etc/hostname
	I1007 12:44:41.430934 1613577 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-842812
	
	I1007 12:44:41.431039 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:41.448988 1613577 main.go:141] libmachine: Using SSH client type: native
	I1007 12:44:41.449264 1613577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 38196 <nil> <nil>}
	I1007 12:44:41.449287 1613577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-842812' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-842812/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-842812' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:44:41.583393 1613577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
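	(The heredoc above is an idempotent /etc/hosts patch: it rewrites the 127.0.1.1 entry, or appends one, only when no line already ends with the hostname, so reruns are harmless. Assuming the SSH port, user, and key shown in this log, the result can be spot-checked with:)
	  ssh -i /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa \
	    -p 38196 docker@127.0.0.1 'grep no-preload-842812 /etc/hosts'
	  # expected: 127.0.1.1 no-preload-842812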
	I1007 12:44:41.583418 1613577 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19763-1394934/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-1394934/.minikube}
	I1007 12:44:41.583446 1613577 ubuntu.go:177] setting up certificates
	I1007 12:44:41.583456 1613577 provision.go:84] configureAuth start
	I1007 12:44:41.583522 1613577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842812
	I1007 12:44:41.600249 1613577 provision.go:143] copyHostCerts
	I1007 12:44:41.600336 1613577 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem, removing ...
	I1007 12:44:41.600349 1613577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem
	I1007 12:44:41.600423 1613577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.pem (1078 bytes)
	I1007 12:44:41.600532 1613577 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem, removing ...
	I1007 12:44:41.600543 1613577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem
	I1007 12:44:41.600573 1613577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/cert.pem (1123 bytes)
	I1007 12:44:41.600637 1613577 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem, removing ...
	I1007 12:44:41.600646 1613577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem
	I1007 12:44:41.600670 1613577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-1394934/.minikube/key.pem (1675 bytes)
	I1007 12:44:41.600723 1613577 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem org=jenkins.no-preload-842812 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-842812]
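	(minikube generates this server certificate in Go rather than by shelling out, but an equivalent openssl sketch shows the same inputs: the org and the five SAN entries from the log line above. File names here are placeholders.)
	  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.no-preload-842812"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-842812')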
	I1007 12:44:42.119245 1613577 provision.go:177] copyRemoteCerts
	I1007 12:44:42.119374 1613577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:44:42.119447 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:42.138702 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:42.238421 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:44:42.265828 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1007 12:44:42.294410 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 12:44:42.322969 1613577 provision.go:87] duration metric: took 739.497439ms to configureAuth
	I1007 12:44:42.322999 1613577 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:44:42.323225 1613577 config.go:182] Loaded profile config "no-preload-842812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:44:42.323235 1613577 machine.go:96] duration metric: took 4.238642817s to provisionDockerMachine
	I1007 12:44:42.323244 1613577 start.go:293] postStartSetup for "no-preload-842812" (driver="docker")
	I1007 12:44:42.323261 1613577 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:44:42.323321 1613577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:44:42.323376 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:42.341896 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:42.440911 1613577 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:44:42.444166 1613577 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:44:42.444201 1613577 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:44:42.444212 1613577 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:44:42.444219 1613577 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:44:42.444230 1613577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/addons for local assets ...
	I1007 12:44:42.444284 1613577 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-1394934/.minikube/files for local assets ...
	I1007 12:44:42.444365 1613577 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem -> 14003082.pem in /etc/ssl/certs
	I1007 12:44:42.444471 1613577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:44:42.452884 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem --> /etc/ssl/certs/14003082.pem (1708 bytes)
	I1007 12:44:42.480044 1613577 start.go:296] duration metric: took 156.778421ms for postStartSetup
	I1007 12:44:42.480132 1613577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:44:42.480176 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:42.496977 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:42.588517 1613577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:44:42.593431 1613577 fix.go:56] duration metric: took 4.945733096s for fixHost
	I1007 12:44:42.593461 1613577 start.go:83] releasing machines lock for "no-preload-842812", held for 4.94578798s
	I1007 12:44:42.593529 1613577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-842812
	I1007 12:44:42.609829 1613577 ssh_runner.go:195] Run: cat /version.json
	I1007 12:44:42.609892 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:42.609891 1613577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:44:42.610182 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:42.630339 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:42.639473 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:42.865901 1613577 ssh_runner.go:195] Run: systemctl --version
	I1007 12:44:42.870271 1613577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:44:42.874713 1613577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 12:44:42.893177 1613577 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
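	(The find/sed pipeline above normalizes the image's loopback CNI config: it injects a "name" field if one is missing and pins cniVersion to 1.0.0. After patching, the file should read roughly as below; the exact filename varies by base image.)
	  cat /etc/cni/net.d/*loopback.conf*
	  # roughly: { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }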
	I1007 12:44:42.893268 1613577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:44:42.902850 1613577 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:44:42.902882 1613577 start.go:495] detecting cgroup driver to use...
	I1007 12:44:42.902937 1613577 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:44:42.903026 1613577 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 12:44:42.921029 1613577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 12:44:42.933169 1613577 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:44:42.933291 1613577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:44:42.946359 1613577 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:44:42.958415 1613577 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:44:43.046423 1613577 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:44:43.147753 1613577 docker.go:233] disabling docker service ...
	I1007 12:44:43.147823 1613577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:44:43.160651 1613577 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:44:43.172931 1613577 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:44:43.271871 1613577 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:44:43.357039 1613577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:44:43.368731 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:44:43.385537 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1007 12:44:43.396109 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 12:44:43.406910 1613577 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 12:44:43.406985 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 12:44:43.416912 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:44:43.426682 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 12:44:43.436759 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:44:43.447068 1613577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:44:43.456533 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 12:44:43.466836 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 12:44:43.476756 1613577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 12:44:43.486831 1613577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:44:43.496216 1613577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:44:43.504903 1613577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:43.590767 1613577 ssh_runner.go:195] Run: sudo systemctl restart containerd
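	(The sed edits above pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, migrate legacy runtime names to io.containerd.runc.v2, and re-enable unprivileged ports. A rough way to confirm the rewritten values in /etc/containerd/config.toml after the restart; exact layout depends on the image's stock config:)
	  grep -n -e 'sandbox_image' -e 'SystemdCgroup' -e 'enable_unprivileged_ports' /etc/containerd/config.toml
	  # roughly: sandbox_image = "registry.k8s.io/pause:3.10"
	  #          SystemdCgroup = false
	  #          enable_unprivileged_ports = true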
	I1007 12:44:43.764881 1613577 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 12:44:43.764952 1613577 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 12:44:43.768753 1613577 start.go:563] Will wait 60s for crictl version
	I1007 12:44:43.768874 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:44:43.772347 1613577 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:44:43.813186 1613577 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
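	(Because /etc/crictl.yaml was written above to point at the containerd socket, crictl commands on the node need no --runtime-endpoint flag; the version block here comes from a plain `crictl version`. Useful follow-ups on such a node:)
	  sudo crictl info    # runtime status and CNI config as JSON
	  sudo crictl ps -a   # all containers, including exited ones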
	I1007 12:44:43.813254 1613577 ssh_runner.go:195] Run: containerd --version
	I1007 12:44:43.835806 1613577 ssh_runner.go:195] Run: containerd --version
	I1007 12:44:43.863584 1613577 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1007 12:44:43.866235 1613577 cli_runner.go:164] Run: docker network inspect no-preload-842812 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:44:43.882550 1613577 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1007 12:44:43.886388 1613577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:44:43.898360 1613577 kubeadm.go:883] updating cluster {Name:no-preload-842812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-842812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:44:43.898500 1613577 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:44:43.898589 1613577 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:44:43.939464 1613577 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 12:44:43.939489 1613577 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:44:43.939497 1613577 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I1007 12:44:43.939669 1613577 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-842812 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-842812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
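	(The empty ExecStart= line in the drop-in above is the standard systemd idiom for clearing the base unit's command before overriding it; the drop-in itself lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp calls further down. The merged unit can be inspected on the node with:)
	  systemctl cat kubelet        # base unit plus drop-ins, as systemd sees them
	  systemctl is-active kubelet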
	I1007 12:44:43.939743 1613577 ssh_runner.go:195] Run: sudo crictl info
	I1007 12:44:43.989764 1613577 cni.go:84] Creating CNI manager for ""
	I1007 12:44:43.989793 1613577 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:44:43.989803 1613577 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:44:43.989827 1613577 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-842812 NodeName:no-preload-842812 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:44:43.989975 1613577 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-842812"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
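	
	The kubeadm config dump above ends with the KubeletConfiguration and KubeProxyConfiguration documents that get written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch of how such a document can be sanity-checked before it is shipped to the node, assuming gopkg.in/yaml.v3 is on the module path; the struct mirrors only the YAML keys this sketch inspects:
	
		package main
	
		import (
			"fmt"
			"log"
	
			"gopkg.in/yaml.v3"
		)
	
		// kubeletConfig mirrors only the fields of the KubeletConfiguration
		// document above that this sketch looks at.
		type kubeletConfig struct {
			CgroupDriver             string `yaml:"cgroupDriver"`
			ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
			StaticPodPath            string `yaml:"staticPodPath"`
		}
	
		func main() {
			doc := []byte(`
		cgroupDriver: cgroupfs
		containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
		staticPodPath: /etc/kubernetes/manifests
		`)
			var cfg kubeletConfig
			if err := yaml.Unmarshal(doc, &cfg); err != nil {
				log.Fatalf("parse kubelet config: %v", err)
			}
			// The endpoint must point at the runtime the cluster was started with.
			if cfg.ContainerRuntimeEndpoint != "unix:///run/containerd/containerd.sock" {
				log.Fatalf("unexpected runtime endpoint: %s", cfg.ContainerRuntimeEndpoint)
			}
			fmt.Printf("cgroupDriver=%s staticPodPath=%s\n", cfg.CgroupDriver, cfg.StaticPodPath)
		}
	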
	
	I1007 12:44:43.990082 1613577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:44:44.015985 1613577 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:44:44.016112 1613577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:44:44.025434 1613577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1007 12:44:44.044576 1613577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:44:44.063807 1613577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I1007 12:44:44.084757 1613577 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1007 12:44:44.088458 1613577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
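	
	The /etc/hosts rewrite above is the usual grep-filter-then-append idiom: drop any stale control-plane.minikube.internal line, then append the current mapping, so the operation stays idempotent across restarts. A rough Go equivalent of that shell one-liner, purely illustrative (the IP and hostname come from the log line; the path is a scratch file, not /etc/hosts):
	
		package main
	
		import (
			"fmt"
			"os"
			"strings"
		)
	
		// upsertHostsEntry rewrites a hosts file so exactly one line maps the
		// given hostname, mirroring the { grep -v ...; echo ...; } idiom above.
		func upsertHostsEntry(path, ip, host string) error {
			data, err := os.ReadFile(path)
			if err != nil {
				return err
			}
			var kept []string
			for _, line := range strings.Split(string(data), "\n") {
				// Drop any existing mapping for this hostname (grep -v).
				if strings.HasSuffix(line, "\t"+host) {
					continue
				}
				kept = append(kept, line)
			}
			kept = append(kept, fmt.Sprintf("%s\t%s", ip, host)) // echo
			return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
		}
	
		func main() {
			if err := upsertHostsEntry("/tmp/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	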
	I1007 12:44:44.100901 1613577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:44.199752 1613577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:44:44.224593 1613577 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812 for IP: 192.168.76.2
	I1007 12:44:44.224661 1613577 certs.go:194] generating shared ca certs ...
	I1007 12:44:44.224692 1613577 certs.go:226] acquiring lock for ca certs: {Name:mk4964dcb525e1a3c94069cf2fb52c246bc0ce74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:44.224875 1613577 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key
	I1007 12:44:44.224941 1613577 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key
	I1007 12:44:44.224962 1613577 certs.go:256] generating profile certs ...
	I1007 12:44:44.225085 1613577 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.key
	I1007 12:44:44.225175 1613577 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/apiserver.key.f0d822dc
	I1007 12:44:44.225255 1613577 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/proxy-client.key
	I1007 12:44:44.225398 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308.pem (1338 bytes)
	W1007 12:44:44.225455 1613577 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308_empty.pem, impossibly tiny 0 bytes
	I1007 12:44:44.225479 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:44:44.225535 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/ca.pem (1078 bytes)
	I1007 12:44:44.225588 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:44:44.225642 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/key.pem (1675 bytes)
	I1007 12:44:44.225714 1613577 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem (1708 bytes)
	I1007 12:44:44.226428 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:44:44.258941 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:44:44.291067 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:44:44.315266 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:44:44.340808 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:44:44.424492 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:44:44.466373 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:44:44.500865 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:44:44.532679 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/ssl/certs/14003082.pem --> /usr/share/ca-certificates/14003082.pem (1708 bytes)
	I1007 12:44:44.560646 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:44:44.591003 1613577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-1394934/.minikube/certs/1400308.pem --> /usr/share/ca-certificates/1400308.pem (1338 bytes)
	I1007 12:44:44.619609 1613577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:44:44.641081 1613577 ssh_runner.go:195] Run: openssl version
	I1007 12:44:44.647917 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14003082.pem && ln -fs /usr/share/ca-certificates/14003082.pem /etc/ssl/certs/14003082.pem"
	I1007 12:44:44.658330 1613577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14003082.pem
	I1007 12:44:44.662002 1613577 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:04 /usr/share/ca-certificates/14003082.pem
	I1007 12:44:44.662071 1613577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14003082.pem
	I1007 12:44:44.669236 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14003082.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:44:44.679053 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:44:44.688780 1613577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:44.692786 1613577 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:44.692893 1613577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:44.701990 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:44:44.711102 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1400308.pem && ln -fs /usr/share/ca-certificates/1400308.pem /etc/ssl/certs/1400308.pem"
	I1007 12:44:44.721041 1613577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1400308.pem
	I1007 12:44:44.724731 1613577 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:04 /usr/share/ca-certificates/1400308.pem
	I1007 12:44:44.724797 1613577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1400308.pem
	I1007 12:44:44.732298 1613577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1400308.pem /etc/ssl/certs/51391683.0"
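	
	Each CA certificate above is exposed to OpenSSL through a hash-named symlink in /etc/ssl/certs via `test -L target || ln -fs source target`. A tiny illustrative Go equivalent of that idempotent link step; the hash filename is copied from the log, and computing it is still left to `openssl x509 -hash`:
	
		package main
	
		import (
			"log"
			"os"
		)
	
		// ensureSymlink creates target -> source only if no symlink exists yet,
		// mirroring the `test -L ... || ln -fs ...` calls in the log above.
		func ensureSymlink(source, target string) error {
			if fi, err := os.Lstat(target); err == nil && fi.Mode()&os.ModeSymlink != 0 {
				return nil // already a symlink; leave it alone
			}
			os.Remove(target) // ignore error: target may simply not exist yet
			return os.Symlink(source, target)
		}
	
		func main() {
			if err := ensureSymlink("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/b5213941.0"); err != nil {
				log.Fatal(err)
			}
		}
	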
	I1007 12:44:44.741570 1613577 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:44:44.745426 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:44:44.752518 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:44:44.759456 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:44:44.766607 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:44:44.773527 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:44:44.780809 1613577 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
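	
	The series of `openssl x509 -checkend 86400` runs above asks one question per certificate: does it remain valid for at least the next 24 hours? The same check expressed with Go's crypto/x509, as a minimal sketch (the PEM path is one of those from the log; only the first PEM block is examined):
	
		package main
	
		import (
			"crypto/x509"
			"encoding/pem"
			"fmt"
			"log"
			"os"
			"time"
		)
	
		// expiresWithin reports whether the first certificate in the PEM file
		// expires within d, the condition `openssl x509 -checkend` tests.
		func expiresWithin(path string, d time.Duration) (bool, error) {
			data, err := os.ReadFile(path)
			if err != nil {
				return false, err
			}
			block, _ := pem.Decode(data)
			if block == nil {
				return false, fmt.Errorf("%s: no PEM block found", path)
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				return false, err
			}
			return time.Now().Add(d).After(cert.NotAfter), nil
		}
	
		func main() {
			soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("expires within 24h:", soon)
		}
	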
	I1007 12:44:44.787988 1613577 kubeadm.go:392] StartCluster: {Name:no-preload-842812 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-842812 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:44:44.788092 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 12:44:44.788189 1613577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:44:44.826300 1613577 cri.go:89] found id: "814f75c77ac0040ccc4d3c81eb9529e4e5408b249a4158974ebf04ffa190b820"
	I1007 12:44:44.826325 1613577 cri.go:89] found id: "36f089345e3539eecfa5eb7210487b87e8338c72f7ff12da89e3ba1d68bba809"
	I1007 12:44:44.826330 1613577 cri.go:89] found id: "68efed76c1e4505bfebe7ed981a4a1cb767417bcc9c1de62f0086de0f980496e"
	I1007 12:44:44.826343 1613577 cri.go:89] found id: "17d55b0482e3212552be8bfecc548d5f66bf02d99a99d14de6f259c635f0766f"
	I1007 12:44:44.826348 1613577 cri.go:89] found id: "4403ba1f0a9b607da97bd56246a70cfa89acacd6d17f0611bbd2692cb85de1a1"
	I1007 12:44:44.826369 1613577 cri.go:89] found id: "b04eb391a33be20bec0f778fb1c3b542e5bd3d1353d6c6e1e9f4f0e2a033eb98"
	I1007 12:44:44.826380 1613577 cri.go:89] found id: "c37386ed89b9d61b1b82675adfdf8bccd71bb807911cfdeb4dd070e5bec7775e"
	I1007 12:44:44.826385 1613577 cri.go:89] found id: "18005b336828432fa03ab428eda3e60c9f36d75d9538ca9f8d27e70dc1fe8a8d"
	I1007 12:44:44.826388 1613577 cri.go:89] found id: ""
	I1007 12:44:44.826459 1613577 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1007 12:44:44.839563 1613577 cri.go:116] JSON = null
	W1007 12:44:44.839703 1613577 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
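	
	The `JSON = null` line explains the warning that follows it: `runc list -f json` prints the literal `null` when it tracks no containers under that root, and unmarshalling `null` into a Go slice succeeds while leaving the slice nil, so the paused-container list comes back with length 0 even though crictl saw 8 containers. A self-contained illustration of that encoding/json behaviour:
	
		package main
	
		import (
			"encoding/json"
			"fmt"
		)
	
		type container struct {
			ID string `json:"id"`
		}
	
		func main() {
			var list []container
			// runc prints the literal `null` when nothing runs under its root;
			// json.Unmarshal accepts it without error and leaves the slice nil.
			if err := json.Unmarshal([]byte("null"), &list); err != nil {
				panic(err)
			}
			fmt.Println(len(list), list == nil) // prints: 0 true
		}
	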
	I1007 12:44:44.839770 1613577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:44:44.849057 1613577 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:44:44.849079 1613577 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:44:44.849150 1613577 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:44:44.858839 1613577 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:44:44.859432 1613577 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-842812" does not appear in /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:44:44.859861 1613577 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-1394934/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-842812" cluster setting kubeconfig missing "no-preload-842812" context setting]
	I1007 12:44:44.860305 1613577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/kubeconfig: {Name:mkef6c987beefaa5e568c1a78e7d094f26b41d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:44.861664 1613577 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:44:44.871381 1613577 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1007 12:44:44.871459 1613577 kubeadm.go:597] duration metric: took 22.372889ms to restartPrimaryControlPlane
	I1007 12:44:44.871482 1613577 kubeadm.go:394] duration metric: took 83.502085ms to StartCluster
	I1007 12:44:44.871503 1613577 settings.go:142] acquiring lock: {Name:mk92e55c8b3391b1d94595f100e47ff9f6bf1d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:44.871615 1613577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:44:44.872585 1613577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/kubeconfig: {Name:mkef6c987beefaa5e568c1a78e7d094f26b41d37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:44.872790 1613577 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 12:44:40.199665 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:42.697609 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:44.699302 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:44.873066 1613577 config.go:182] Loaded profile config "no-preload-842812": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:44:44.873123 1613577 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:44:44.873188 1613577 addons.go:69] Setting storage-provisioner=true in profile "no-preload-842812"
	I1007 12:44:44.873202 1613577 addons.go:234] Setting addon storage-provisioner=true in "no-preload-842812"
	W1007 12:44:44.873208 1613577 addons.go:243] addon storage-provisioner should already be in state true
	I1007 12:44:44.873232 1613577 host.go:66] Checking if "no-preload-842812" exists ...
	I1007 12:44:44.873324 1613577 addons.go:69] Setting default-storageclass=true in profile "no-preload-842812"
	I1007 12:44:44.873346 1613577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-842812"
	I1007 12:44:44.873634 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:44.873756 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:44.874268 1613577 addons.go:69] Setting metrics-server=true in profile "no-preload-842812"
	I1007 12:44:44.874287 1613577 addons.go:234] Setting addon metrics-server=true in "no-preload-842812"
	W1007 12:44:44.874294 1613577 addons.go:243] addon metrics-server should already be in state true
	I1007 12:44:44.874319 1613577 host.go:66] Checking if "no-preload-842812" exists ...
	I1007 12:44:44.874726 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:44.876608 1613577 addons.go:69] Setting dashboard=true in profile "no-preload-842812"
	I1007 12:44:44.876636 1613577 addons.go:234] Setting addon dashboard=true in "no-preload-842812"
	W1007 12:44:44.876741 1613577 addons.go:243] addon dashboard should already be in state true
	I1007 12:44:44.876783 1613577 host.go:66] Checking if "no-preload-842812" exists ...
	I1007 12:44:44.877354 1613577 out.go:177] * Verifying Kubernetes components...
	I1007 12:44:44.877923 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:44.880728 1613577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:44.917416 1613577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:44:44.921583 1613577 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:44:44.921616 1613577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:44:44.921684 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:44.944867 1613577 addons.go:234] Setting addon default-storageclass=true in "no-preload-842812"
	W1007 12:44:44.944899 1613577 addons.go:243] addon default-storageclass should already be in state true
	I1007 12:44:44.944926 1613577 host.go:66] Checking if "no-preload-842812" exists ...
	I1007 12:44:44.945346 1613577 cli_runner.go:164] Run: docker container inspect no-preload-842812 --format={{.State.Status}}
	I1007 12:44:44.945525 1613577 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1007 12:44:44.951281 1613577 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1007 12:44:44.954200 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1007 12:44:44.954225 1613577 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1007 12:44:44.954290 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:44.983667 1613577 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 12:44:44.987181 1613577 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:44:44.987206 1613577 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:44:44.987283 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:45.000447 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:45.011690 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:45.078626 1613577 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:44:45.078651 1613577 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:44:45.078735 1613577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-842812
	I1007 12:44:45.086566 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:45.117535 1613577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38196 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/no-preload-842812/id_rsa Username:docker}
	I1007 12:44:45.147013 1613577 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:44:45.270661 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1007 12:44:45.270688 1613577 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1007 12:44:45.282350 1613577 node_ready.go:35] waiting up to 6m0s for node "no-preload-842812" to be "Ready" ...
	I1007 12:44:45.328092 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1007 12:44:45.328164 1613577 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1007 12:44:45.368482 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1007 12:44:45.368557 1613577 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1007 12:44:45.459970 1613577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:44:45.466319 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1007 12:44:45.466391 1613577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1007 12:44:45.466422 1613577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:44:45.494507 1613577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:44:45.494579 1613577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 12:44:45.499101 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1007 12:44:45.499128 1613577 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1007 12:44:45.579514 1613577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:44:45.579552 1613577 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:44:45.585385 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1007 12:44:45.585412 1613577 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1007 12:44:45.688458 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1007 12:44:45.688487 1613577 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1007 12:44:45.758934 1613577 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:44:45.758956 1613577 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:44:45.935121 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1007 12:44:45.935148 1613577 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1007 12:44:45.937224 1613577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:44:46.100074 1613577 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:44:46.100103 1613577 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1007 12:44:46.183411 1613577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 12:44:47.203169 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:49.701442 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:49.668261 1613577 node_ready.go:49] node "no-preload-842812" has status "Ready":"True"
	I1007 12:44:49.668292 1613577 node_ready.go:38] duration metric: took 4.385909987s for node "no-preload-842812" to be "Ready" ...
	I1007 12:44:49.668303 1613577 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
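	
	The pod_ready lines that dominate the rest of this log come out of a poll loop: re-check pod status on an interval until the pod reports Ready or the deadline (6m0s here) runs out. A stripped-down sketch of that wait pattern using context and time.Ticker; checkReady stands in for the real status lookup and is purely hypothetical:
	
		package main
	
		import (
			"context"
			"errors"
			"fmt"
			"time"
		)
	
		// waitReady polls checkReady on the given interval until it returns
		// true or ctx expires, the same shape as the pod_ready wait above.
		func waitReady(ctx context.Context, interval time.Duration, checkReady func() bool) error {
			ticker := time.NewTicker(interval)
			defer ticker.Stop()
			for {
				if checkReady() {
					return nil
				}
				select {
				case <-ctx.Done():
					return errors.New("timed out waiting for pod to be Ready")
				case <-ticker.C:
				}
			}
		}
	
		func main() {
			ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
			defer cancel()
			start := time.Now()
			// Toy condition: becomes "Ready" after five seconds.
			err := waitReady(ctx, 2*time.Second, func() bool { return time.Since(start) > 5*time.Second })
			fmt.Println(err) // prints: <nil>
		}
	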
	I1007 12:44:49.694532 1613577 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rbj2k" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.746372 1613577 pod_ready.go:93] pod "coredns-7c65d6cfc9-rbj2k" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:49.746400 1613577 pod_ready.go:82] duration metric: took 51.832161ms for pod "coredns-7c65d6cfc9-rbj2k" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.746414 1613577 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.774560 1613577 pod_ready.go:93] pod "etcd-no-preload-842812" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:49.774589 1613577 pod_ready.go:82] duration metric: took 28.166883ms for pod "etcd-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.774605 1613577 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.788737 1613577 pod_ready.go:93] pod "kube-apiserver-no-preload-842812" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:49.788764 1613577 pod_ready.go:82] duration metric: took 14.15107ms for pod "kube-apiserver-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.788777 1613577 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.810358 1613577 pod_ready.go:93] pod "kube-controller-manager-no-preload-842812" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:49.810386 1613577 pod_ready.go:82] duration metric: took 21.60148ms for pod "kube-controller-manager-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.810400 1613577 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h8rl7" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.871802 1613577 pod_ready.go:93] pod "kube-proxy-h8rl7" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:49.871871 1613577 pod_ready.go:82] duration metric: took 61.462224ms for pod "kube-proxy-h8rl7" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:49.871897 1613577 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:50.005390 1613577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.545344258s)
	I1007 12:44:50.272419 1613577 pod_ready.go:93] pod "kube-scheduler-no-preload-842812" in "kube-system" namespace has status "Ready":"True"
	I1007 12:44:50.272496 1613577 pod_ready.go:82] duration metric: took 400.578665ms for pod "kube-scheduler-no-preload-842812" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:50.272524 1613577 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace to be "Ready" ...
	I1007 12:44:52.290342 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:52.581193 1613577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.114736395s)
	I1007 12:44:52.581447 1613577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.644173945s)
	I1007 12:44:52.581502 1613577 addons.go:475] Verifying addon metrics-server=true in "no-preload-842812"
	I1007 12:44:52.581691 1613577 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.398235295s)
	I1007 12:44:52.584732 1613577 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-842812 addons enable metrics-server
	
	I1007 12:44:52.587412 1613577 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1007 12:44:52.199748 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:54.701076 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:52.590023 1613577 addons.go:510] duration metric: took 7.716893913s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1007 12:44:54.778553 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:56.779968 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:57.198747 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:59.199738 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:44:59.281463 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:01.780736 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:01.702620 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:04.202074 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:04.278520 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:06.278966 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:06.697774 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:09.197294 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:08.279180 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:10.279273 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:11.198135 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:13.699095 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:12.780259 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:15.278619 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:17.278882 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:16.198320 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:18.199073 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:19.778851 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:22.278178 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:20.199653 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:22.698908 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:24.764533 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:24.279369 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:26.779295 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:27.204475 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:29.697950 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:29.278650 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:31.778633 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:31.698174 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:34.198344 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:33.779105 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:35.779302 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:36.199034 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:38.698518 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:38.278402 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:40.278805 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:42.278903 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:41.197739 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:43.198274 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:44.779008 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:47.278356 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:45.200214 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:47.698465 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:49.778375 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:51.779736 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:50.209612 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:52.698032 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:54.698529 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:54.278296 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:56.279148 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:57.199042 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:59.199947 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:45:58.779068 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:00.779588 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:01.699172 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:04.198094 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:03.278629 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:05.279301 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:06.200977 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:08.698731 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:07.779057 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:09.779346 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:12.278256 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:11.197742 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:13.697536 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:14.280204 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:16.778594 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:15.698350 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:17.698902 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:18.778681 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:21.278024 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:20.198118 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:22.198991 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:24.698954 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:23.278527 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:25.279247 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:27.197821 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:29.197963 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:27.779142 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:30.278177 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:32.278689 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:31.698121 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:33.698840 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:34.778393 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:36.782118 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:35.699055 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:38.198461 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:39.279090 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:41.778877 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:40.201085 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:42.697962 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:44.699060 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:43.779130 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:46.278922 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:47.198069 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:49.697694 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:48.782348 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:51.278700 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:51.698530 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:53.712643 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:53.279052 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:55.778671 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:56.198411 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:58.698498 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:46:58.278866 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:00.280312 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:02.280505 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:00.698549 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:02.698653 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:04.778478 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:06.779099 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:05.199908 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:07.699357 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:08.785698 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:11.278950 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:10.198724 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:12.199314 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:14.698621 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:13.778970 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:16.279194 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:16.698797 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:19.197468 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:18.778459 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:20.779021 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:21.197672 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:23.198193 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:22.779235 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:25.279872 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:27.280700 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:25.779910 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:28.199890 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:29.778483 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:31.779257 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:30.698423 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:32.699068 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:34.279454 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:36.779091 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:35.197849 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:37.198061 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:39.198266 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:39.278668 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:41.279008 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:41.698409 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:43.698526 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:43.778985 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:45.779045 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:46.197607 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:48.698495 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:47.779497 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:50.278772 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:50.698560 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:52.698657 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:52.778666 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:55.278952 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:55.198421 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:57.198532 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:59.198676 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:47:57.779283 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:00.291604 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:01.699140 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:04.198451 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:02.778760 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:04.778995 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:06.779086 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:06.698886 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:09.198184 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:09.278443 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:11.777986 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:11.698682 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:13.699153 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:13.778601 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:16.279450 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:16.245742 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:18.699289 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:18.782934 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:21.278680 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:21.198024 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:23.198132 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:23.279270 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:25.279615 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:25.701146 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:28.198056 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:27.778789 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:30.278990 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:32.279637 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:30.198477 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:32.198858 1605045 pod_ready.go:103] pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:33.698488 1605045 pod_ready.go:82] duration metric: took 4m0.006757182s for pod "metrics-server-9975d5f86-vkrtw" in "kube-system" namespace to be "Ready" ...
	E1007 12:48:33.698517 1605045 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 12:48:33.698528 1605045 pod_ready.go:39] duration metric: took 5m19.765021471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
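[Editor's note] The interleaved pod_ready:103 lines above come from two parallel test runs (PIDs 1605045 and 1613577), each polling its own metrics-server pod; for 1605045 the Ready wait gives up here after its 4m budget. As a rough illustration of that polling pattern, here is a minimal client-go sketch in Go — not minikube's actual pod_ready.go; the namespace, pod name, and 4-minute timeout are copied from the log, everything else is an assumption:

    // Minimal sketch: poll a pod's Ready condition until true or timeout.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API errors as transient; keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Pod name and 4m budget taken from the log above.
    	err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-9975d5f86-vkrtw", 4*time.Minute)
    	fmt.Println("ready wait result:", err)
    }

Returning false, nil on transient errors keeps the poll alive until the deadline, which is why the failure above surfaces as "context deadline exceeded" rather than an API error.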
	I1007 12:48:33.698541 1605045 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:48:33.698570 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:48:33.698639 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:48:33.745225 1605045 cri.go:89] found id: "c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:33.745249 1605045 cri.go:89] found id: "087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:33.745255 1605045 cri.go:89] found id: ""
	I1007 12:48:33.745262 1605045 logs.go:282] 2 containers: [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38]
	I1007 12:48:33.745321 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.748942 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.752463 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 12:48:33.752545 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:48:33.797525 1605045 cri.go:89] found id: "1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:33.797603 1605045 cri.go:89] found id: "1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:33.797624 1605045 cri.go:89] found id: ""
	I1007 12:48:33.797633 1605045 logs.go:282] 2 containers: [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b]
	I1007 12:48:33.797702 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.801579 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.805144 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 12:48:33.805217 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:48:33.842578 1605045 cri.go:89] found id: "4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:33.842647 1605045 cri.go:89] found id: "8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:33.842667 1605045 cri.go:89] found id: ""
	I1007 12:48:33.842687 1605045 logs.go:282] 2 containers: [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a]
	I1007 12:48:33.842825 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.846543 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.850168 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:48:33.850239 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:48:33.889762 1605045 cri.go:89] found id: "8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:33.889784 1605045 cri.go:89] found id: "56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:33.889789 1605045 cri.go:89] found id: ""
	I1007 12:48:33.889796 1605045 logs.go:282] 2 containers: [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990]
	I1007 12:48:33.889854 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.893670 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.897461 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:48:33.897532 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:48:33.938164 1605045 cri.go:89] found id: "e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:33.938245 1605045 cri.go:89] found id: "1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:33.938258 1605045 cri.go:89] found id: ""
	I1007 12:48:33.938266 1605045 logs.go:282] 2 containers: [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da]
	I1007 12:48:33.938330 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.942472 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.946703 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:48:33.946798 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:48:33.990267 1605045 cri.go:89] found id: "07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:33.990292 1605045 cri.go:89] found id: "1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:33.990300 1605045 cri.go:89] found id: ""
	I1007 12:48:33.990308 1605045 logs.go:282] 2 containers: [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca]
	I1007 12:48:33.990370 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.994131 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:33.997636 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 12:48:33.997710 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:48:34.039420 1605045 cri.go:89] found id: "8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:34.039443 1605045 cri.go:89] found id: "9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:34.039448 1605045 cri.go:89] found id: ""
	I1007 12:48:34.039455 1605045 logs.go:282] 2 containers: [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1]
	I1007 12:48:34.039583 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.043365 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.047381 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:48:34.047483 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:48:34.093112 1605045 cri.go:89] found id: "a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:34.093187 1605045 cri.go:89] found id: "4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:34.093208 1605045 cri.go:89] found id: ""
	I1007 12:48:34.093236 1605045 logs.go:282] 2 containers: [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468]
	I1007 12:48:34.093313 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.099043 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.103144 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 12:48:34.103227 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 12:48:34.144123 1605045 cri.go:89] found id: "ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:34.144147 1605045 cri.go:89] found id: ""
	I1007 12:48:34.144154 1605045 logs.go:282] 1 containers: [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0]
	I1007 12:48:34.144211 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:34.148181 1605045 logs.go:123] Gathering logs for kubelet ...
	I1007 12:48:34.148221 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:48:34.200768 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.872520     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ktk5h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ktk5h" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201084 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891855     658 reflector.go:138] object-"kube-system"/"kindnet-token-srnxf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srnxf" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201296 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891938     658 reflector.go:138] object-"kube-system"/"coredns-token-627jj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-627jj" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201511 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893244     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-khb44": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-khb44" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201722 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893311     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.201944 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893344     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.202154 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952228     658 reflector.go:138] object-"default"/"default-token-nq6kr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-nq6kr" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.202374 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952337     658 reflector.go:138] object-"kube-system"/"metrics-server-token-t2mrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t2mrc" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:34.212449 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:15 old-k8s-version-130031 kubelet[658]: E1007 12:43:15.871648     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.212932 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:16 old-k8s-version-130031 kubelet[658]: E1007 12:43:16.563851     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.216459 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:30 old-k8s-version-130031 kubelet[658]: E1007 12:43:30.401060     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.218593 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.377295     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.219048 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.733029     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.219377 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:45 old-k8s-version-130031 kubelet[658]: E1007 12:43:45.736335     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.219828 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:47 old-k8s-version-130031 kubelet[658]: E1007 12:43:47.745194     658 pod_workers.go:191] Error syncing pod 2562f693-2c1c-4966-9978-9712666b4812 ("storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"
	W1007 12:48:34.220493 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:53 old-k8s-version-130031 kubelet[658]: E1007 12:43:53.692850     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.223067 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:59 old-k8s-version-130031 kubelet[658]: E1007 12:43:59.383291     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.223690 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:06 old-k8s-version-130031 kubelet[658]: E1007 12:44:06.798826     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.223879 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:10 old-k8s-version-130031 kubelet[658]: E1007 12:44:10.374328     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.224209 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:13 old-k8s-version-130031 kubelet[658]: E1007 12:44:13.693437     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.224395 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:21 old-k8s-version-130031 kubelet[658]: E1007 12:44:21.373629     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.224727 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:26 old-k8s-version-130031 kubelet[658]: E1007 12:44:26.373370     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.224912 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:35 old-k8s-version-130031 kubelet[658]: E1007 12:44:35.373708     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.225493 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:40 old-k8s-version-130031 kubelet[658]: E1007 12:44:40.884522     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.225817 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:43 old-k8s-version-130031 kubelet[658]: E1007 12:44:43.693924     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.228241 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:46 old-k8s-version-130031 kubelet[658]: E1007 12:44:46.384700     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.228566 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:57 old-k8s-version-130031 kubelet[658]: E1007 12:44:57.373721     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.228750 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:59 old-k8s-version-130031 kubelet[658]: E1007 12:44:59.375281     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.229079 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:09 old-k8s-version-130031 kubelet[658]: E1007 12:45:09.373255     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.229265 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:11 old-k8s-version-130031 kubelet[658]: E1007 12:45:11.373876     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.229846 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:22 old-k8s-version-130031 kubelet[658]: E1007 12:45:21.996110     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230218 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:23 old-k8s-version-130031 kubelet[658]: E1007 12:45:23.693664     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230407 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:26 old-k8s-version-130031 kubelet[658]: E1007 12:45:26.374341     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.230735 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.374842     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.230922 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.377418     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231245 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.375115     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.231431 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.378298     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231626 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:01 old-k8s-version-130031 kubelet[658]: E1007 12:46:01.373996     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.231965 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:03 old-k8s-version-130031 kubelet[658]: E1007 12:46:03.373238     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.234414 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:14 old-k8s-version-130031 kubelet[658]: E1007 12:46:14.383215     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:34.234741 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:16 old-k8s-version-130031 kubelet[658]: E1007 12:46:16.373283     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.234926 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:25 old-k8s-version-130031 kubelet[658]: E1007 12:46:25.373793     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.235250 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:28 old-k8s-version-130031 kubelet[658]: E1007 12:46:28.373827     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.235434 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:39 old-k8s-version-130031 kubelet[658]: E1007 12:46:39.373930     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.236040 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:44 old-k8s-version-130031 kubelet[658]: E1007 12:46:44.239654     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.236227 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:50 old-k8s-version-130031 kubelet[658]: E1007 12:46:50.374092     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.236557 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:53 old-k8s-version-130031 kubelet[658]: E1007 12:46:53.693317     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.236742 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:01 old-k8s-version-130031 kubelet[658]: E1007 12:47:01.373998     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.237067 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:04 old-k8s-version-130031 kubelet[658]: E1007 12:47:04.374129     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.237252 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:14 old-k8s-version-130031 kubelet[658]: E1007 12:47:14.373917     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.237578 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: E1007 12:47:17.373207     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.237763 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:28 old-k8s-version-130031 kubelet[658]: E1007 12:47:28.373860     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.238090 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: E1007 12:47:29.373269     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.238417 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373813     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.238602 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373880     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.238927 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: E1007 12:47:54.374386     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.239112 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:56 old-k8s-version-130031 kubelet[658]: E1007 12:47:56.373681     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.239436 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.239643 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.239974 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:34.240160 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:34.240488 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
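[Editor's note] The block of "Found kubelet problem" warnings above is produced by scanning the last 400 journalctl lines for known failure signatures (logs.go:138). The recurring ErrImagePull/ImagePullBackOff entries are evidently intentional: the registry is literally named fake.domain and never resolves, so metrics-server can never pull its image. A hypothetical Go sketch of that kind of scan — the pattern list and local exec are illustrative assumptions, not minikube's logs.go, which runs the command over SSH:

    // Hypothetical sketch: scan kubelet journal output for failure signatures.
    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    var problemPatterns = []*regexp.Regexp{
    	regexp.MustCompile(`ErrImagePull`),
    	regexp.MustCompile(`ImagePullBackOff`),
    	regexp.MustCompile(`CrashLoopBackOff`),
    	regexp.MustCompile(`Failed to watch \*v1\.(Secret|ConfigMap)`),
    }

    func main() {
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl failed:", err)
    		return
    	}
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range problemPatterns {
    			if p.MatchString(line) {
    				fmt.Println("Found kubelet problem:", line)
    				break
    			}
    		}
    	}
    }

Each matched journal line is reported once, which matches the one-warning-per-kubelet-event shape of the W-level output above.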
	I1007 12:48:34.240499 1605045 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:48:34.240513 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:48:34.445314 1605045 logs.go:123] Gathering logs for kube-scheduler [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a] ...
	I1007 12:48:34.445346 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:34.494776 1605045 logs.go:123] Gathering logs for kube-scheduler [56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990] ...
	I1007 12:48:34.494805 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:34.547430 1605045 logs.go:123] Gathering logs for kindnet [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4] ...
	I1007 12:48:34.547462 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:34.604841 1605045 logs.go:123] Gathering logs for containerd ...
	I1007 12:48:34.604873 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 12:48:34.668346 1605045 logs.go:123] Gathering logs for container status ...
	I1007 12:48:34.668385 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:48:34.719137 1605045 logs.go:123] Gathering logs for dmesg ...
	I1007 12:48:34.719168 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:48:34.737624 1605045 logs.go:123] Gathering logs for etcd [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc] ...
	I1007 12:48:34.737656 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:34.794782 1605045 logs.go:123] Gathering logs for kube-proxy [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a] ...
	I1007 12:48:34.794815 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:34.834055 1605045 logs.go:123] Gathering logs for kube-proxy [1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da] ...
	I1007 12:48:34.834081 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:34.876737 1605045 logs.go:123] Gathering logs for kube-controller-manager [1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca] ...
	I1007 12:48:34.876765 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:34.280517 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:36.778943 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:34.934494 1605045 logs.go:123] Gathering logs for coredns [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55] ...
	I1007 12:48:34.934534 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:34.983022 1605045 logs.go:123] Gathering logs for coredns [8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a] ...
	I1007 12:48:34.983052 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:35.030424 1605045 logs.go:123] Gathering logs for kube-controller-manager [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df] ...
	I1007 12:48:35.030499 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:35.101750 1605045 logs.go:123] Gathering logs for kindnet [9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1] ...
	I1007 12:48:35.101794 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:35.163310 1605045 logs.go:123] Gathering logs for storage-provisioner [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444] ...
	I1007 12:48:35.163344 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:35.233162 1605045 logs.go:123] Gathering logs for storage-provisioner [4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468] ...
	I1007 12:48:35.233194 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:35.301675 1605045 logs.go:123] Gathering logs for kubernetes-dashboard [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0] ...
	I1007 12:48:35.301700 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:35.342492 1605045 logs.go:123] Gathering logs for kube-apiserver [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a] ...
	I1007 12:48:35.342563 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:35.404529 1605045 logs.go:123] Gathering logs for kube-apiserver [087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38] ...
	I1007 12:48:35.404565 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:35.470385 1605045 logs.go:123] Gathering logs for etcd [1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b] ...
	I1007 12:48:35.470418 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:35.513496 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:35.513522 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:48:35.513597 1605045 out.go:270] X Problems detected in kubelet:
	W1007 12:48:35.513608 1605045 out.go:270]   Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:35.513617 1605045 out.go:270]   Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:35.513670 1605045 out.go:270]   Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:35.513684 1605045 out.go:270]   Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:35.513690 1605045 out.go:270]   Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	I1007 12:48:35.513702 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:35.513710 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:48:39.279091 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:41.778130 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:43.778849 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:45.780114 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:45.514839 1605045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:48:45.529032 1605045 api_server.go:72] duration metric: took 5m53.378623593s to wait for apiserver process to appear ...
	I1007 12:48:45.529060 1605045 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:48:45.529095 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:48:45.529154 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:48:45.575104 1605045 cri.go:89] found id: "c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:45.575123 1605045 cri.go:89] found id: "087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:45.575129 1605045 cri.go:89] found id: ""
	I1007 12:48:45.575135 1605045 logs.go:282] 2 containers: [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38]
	I1007 12:48:45.575192 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.578978 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.582407 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 12:48:45.582478 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:48:45.629320 1605045 cri.go:89] found id: "1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:45.629342 1605045 cri.go:89] found id: "1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:45.629347 1605045 cri.go:89] found id: ""
	I1007 12:48:45.629353 1605045 logs.go:282] 2 containers: [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b]
	I1007 12:48:45.629409 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.633005 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.636292 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 12:48:45.636360 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:48:45.672536 1605045 cri.go:89] found id: "4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:45.672558 1605045 cri.go:89] found id: "8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:45.672563 1605045 cri.go:89] found id: ""
	I1007 12:48:45.672570 1605045 logs.go:282] 2 containers: [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a]
	I1007 12:48:45.672627 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.676578 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.679886 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:48:45.679950 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:48:45.726013 1605045 cri.go:89] found id: "8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:45.726039 1605045 cri.go:89] found id: "56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:45.726044 1605045 cri.go:89] found id: ""
	I1007 12:48:45.726053 1605045 logs.go:282] 2 containers: [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990]
	I1007 12:48:45.726108 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.729958 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.733303 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:48:45.733377 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:48:45.772251 1605045 cri.go:89] found id: "e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:45.772273 1605045 cri.go:89] found id: "1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:45.772278 1605045 cri.go:89] found id: ""
	I1007 12:48:45.772286 1605045 logs.go:282] 2 containers: [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da]
	I1007 12:48:45.772341 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.777423 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.781563 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:48:45.781630 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:48:45.817672 1605045 cri.go:89] found id: "07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:45.817702 1605045 cri.go:89] found id: "1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:45.817706 1605045 cri.go:89] found id: ""
	I1007 12:48:45.817714 1605045 logs.go:282] 2 containers: [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca]
	I1007 12:48:45.817770 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.821567 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.824945 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 12:48:45.825011 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:48:45.864398 1605045 cri.go:89] found id: "8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:45.864418 1605045 cri.go:89] found id: "9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:45.864422 1605045 cri.go:89] found id: ""
	I1007 12:48:45.864435 1605045 logs.go:282] 2 containers: [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1]
	I1007 12:48:45.864491 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.868357 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.871596 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 12:48:45.871671 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 12:48:45.911632 1605045 cri.go:89] found id: "ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:45.911653 1605045 cri.go:89] found id: ""
	I1007 12:48:45.911662 1605045 logs.go:282] 1 containers: [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0]
	I1007 12:48:45.911716 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.915510 1605045 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:48:45.915660 1605045 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:48:45.953579 1605045 cri.go:89] found id: "a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:45.953604 1605045 cri.go:89] found id: "4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:45.953609 1605045 cri.go:89] found id: ""
	I1007 12:48:45.953616 1605045 logs.go:282] 2 containers: [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468]
	I1007 12:48:45.953678 1605045 ssh_runner.go:195] Run: which crictl
	I1007 12:48:45.957566 1605045 ssh_runner.go:195] Run: which crictl
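	(The run above is minikube's container-discovery pass: for each control-plane component it executes `sudo crictl ps -a --quiet --name=<component>` on the node and records every non-empty output line as a container id. Two ids turn up per component here, consistent with the SecondStart restart leaving the pre-restart containers behind in an exited state. A minimal sketch of the same pattern, assuming crictl and sudo are available on the local machine rather than behind minikube's ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers mirrors the cri.go discovery step seen in the log: it asks
    // crictl for the ids of all containers (running or exited) whose name
    // matches the given component.
    func listContainers(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line) // one container id per line
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		ids, err := listContainers(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }

	End of annotation; the trace continues below.)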
	I1007 12:48:45.961195 1605045 logs.go:123] Gathering logs for kube-scheduler [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a] ...
	I1007 12:48:45.961225 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a"
	I1007 12:48:46.020263 1605045 logs.go:123] Gathering logs for kube-proxy [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a] ...
	I1007 12:48:46.020304 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a"
	I1007 12:48:46.069672 1605045 logs.go:123] Gathering logs for kube-proxy [1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da] ...
	I1007 12:48:46.069706 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da"
	I1007 12:48:46.114842 1605045 logs.go:123] Gathering logs for kube-controller-manager [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df] ...
	I1007 12:48:46.114882 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df"
	I1007 12:48:46.195968 1605045 logs.go:123] Gathering logs for kubelet ...
	I1007 12:48:46.196002 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:48:46.266565 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.872520     658 reflector.go:138] object-"kube-system"/"storage-provisioner-token-ktk5h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-ktk5h" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.266905 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891855     658 reflector.go:138] object-"kube-system"/"kindnet-token-srnxf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srnxf" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267119 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.891938     658 reflector.go:138] object-"kube-system"/"coredns-token-627jj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-627jj" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267333 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893244     658 reflector.go:138] object-"kube-system"/"kube-proxy-token-khb44": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-khb44" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267575 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893311     658 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267776 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.893344     658 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.267983 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952228     658 reflector.go:138] object-"default"/"default-token-nq6kr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-nq6kr" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.268201 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:13 old-k8s-version-130031 kubelet[658]: E1007 12:43:13.952337     658 reflector.go:138] object-"kube-system"/"metrics-server-token-t2mrc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-t2mrc" is forbidden: User "system:node:old-k8s-version-130031" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-130031' and this object
	W1007 12:48:46.282208 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:15 old-k8s-version-130031 kubelet[658]: E1007 12:43:15.871648     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.282707 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:16 old-k8s-version-130031 kubelet[658]: E1007 12:43:16.563851     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.286279 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:30 old-k8s-version-130031 kubelet[658]: E1007 12:43:30.401060     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.288534 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.377295     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.288993 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:44 old-k8s-version-130031 kubelet[658]: E1007 12:43:44.733029     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.289330 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:45 old-k8s-version-130031 kubelet[658]: E1007 12:43:45.736335     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.289765 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:47 old-k8s-version-130031 kubelet[658]: E1007 12:43:47.745194     658 pod_workers.go:191] Error syncing pod 2562f693-2c1c-4966-9978-9712666b4812 ("storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(2562f693-2c1c-4966-9978-9712666b4812)"
	W1007 12:48:46.290420 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:53 old-k8s-version-130031 kubelet[658]: E1007 12:43:53.692850     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.293037 1605045 logs.go:138] Found kubelet problem: Oct 07 12:43:59 old-k8s-version-130031 kubelet[658]: E1007 12:43:59.383291     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.293627 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:06 old-k8s-version-130031 kubelet[658]: E1007 12:44:06.798826     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.293813 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:10 old-k8s-version-130031 kubelet[658]: E1007 12:44:10.374328     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.294135 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:13 old-k8s-version-130031 kubelet[658]: E1007 12:44:13.693437     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.294318 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:21 old-k8s-version-130031 kubelet[658]: E1007 12:44:21.373629     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.294642 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:26 old-k8s-version-130031 kubelet[658]: E1007 12:44:26.373370     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.294826 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:35 old-k8s-version-130031 kubelet[658]: E1007 12:44:35.373708     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.295405 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:40 old-k8s-version-130031 kubelet[658]: E1007 12:44:40.884522     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.295738 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:43 old-k8s-version-130031 kubelet[658]: E1007 12:44:43.693924     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.298191 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:46 old-k8s-version-130031 kubelet[658]: E1007 12:44:46.384700     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.298518 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:57 old-k8s-version-130031 kubelet[658]: E1007 12:44:57.373721     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.298703 1605045 logs.go:138] Found kubelet problem: Oct 07 12:44:59 old-k8s-version-130031 kubelet[658]: E1007 12:44:59.375281     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.299029 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:09 old-k8s-version-130031 kubelet[658]: E1007 12:45:09.373255     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.299214 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:11 old-k8s-version-130031 kubelet[658]: E1007 12:45:11.373876     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.299803 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:22 old-k8s-version-130031 kubelet[658]: E1007 12:45:21.996110     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300128 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:23 old-k8s-version-130031 kubelet[658]: E1007 12:45:23.693664     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300311 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:26 old-k8s-version-130031 kubelet[658]: E1007 12:45:26.374341     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.300635 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.374842     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.300822 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:38 old-k8s-version-130031 kubelet[658]: E1007 12:45:38.377418     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301152 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.375115     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.301335 1605045 logs.go:138] Found kubelet problem: Oct 07 12:45:50 old-k8s-version-130031 kubelet[658]: E1007 12:45:50.378298     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301518 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:01 old-k8s-version-130031 kubelet[658]: E1007 12:46:01.373996     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.301842 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:03 old-k8s-version-130031 kubelet[658]: E1007 12:46:03.373238     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.304301 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:14 old-k8s-version-130031 kubelet[658]: E1007 12:46:14.383215     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1007 12:48:46.304631 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:16 old-k8s-version-130031 kubelet[658]: E1007 12:46:16.373283     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.304815 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:25 old-k8s-version-130031 kubelet[658]: E1007 12:46:25.373793     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.305143 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:28 old-k8s-version-130031 kubelet[658]: E1007 12:46:28.373827     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.305336 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:39 old-k8s-version-130031 kubelet[658]: E1007 12:46:39.373930     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.305921 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:44 old-k8s-version-130031 kubelet[658]: E1007 12:46:44.239654     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.306105 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:50 old-k8s-version-130031 kubelet[658]: E1007 12:46:50.374092     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.306427 1605045 logs.go:138] Found kubelet problem: Oct 07 12:46:53 old-k8s-version-130031 kubelet[658]: E1007 12:46:53.693317     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.306611 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:01 old-k8s-version-130031 kubelet[658]: E1007 12:47:01.373998     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.306938 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:04 old-k8s-version-130031 kubelet[658]: E1007 12:47:04.374129     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.307131 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:14 old-k8s-version-130031 kubelet[658]: E1007 12:47:14.373917     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.307465 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: E1007 12:47:17.373207     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.307663 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:28 old-k8s-version-130031 kubelet[658]: E1007 12:47:28.373860     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.308013 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: E1007 12:47:29.373269     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.308341 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373813     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.308531 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373880     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.308859 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: E1007 12:47:54.374386     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.309045 1605045 logs.go:138] Found kubelet problem: Oct 07 12:47:56 old-k8s-version-130031 kubelet[658]: E1007 12:47:56.373681     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.309376 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.309561 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.309895 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.310080 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:46.310406 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:46.310601 1605045 logs.go:138] Found kubelet problem: Oct 07 12:48:36 old-k8s-version-130031 kubelet[658]: E1007 12:48:36.373890     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1007 12:48:46.310613 1605045 logs.go:123] Gathering logs for dmesg ...
	I1007 12:48:46.310628 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:48:46.330502 1605045 logs.go:123] Gathering logs for kube-apiserver [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a] ...
	I1007 12:48:46.330530 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a"
	I1007 12:48:46.411082 1605045 logs.go:123] Gathering logs for etcd [1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b] ...
	I1007 12:48:46.411116 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b"
	I1007 12:48:46.461183 1605045 logs.go:123] Gathering logs for kindnet [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4] ...
	I1007 12:48:46.461211 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4"
	I1007 12:48:46.529582 1605045 logs.go:123] Gathering logs for kube-apiserver [087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38] ...
	I1007 12:48:46.529616 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38"
	I1007 12:48:46.591839 1605045 logs.go:123] Gathering logs for coredns [8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a] ...
	I1007 12:48:46.591876 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a"
	I1007 12:48:46.638143 1605045 logs.go:123] Gathering logs for kubernetes-dashboard [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0] ...
	I1007 12:48:46.638171 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0"
	I1007 12:48:46.692586 1605045 logs.go:123] Gathering logs for storage-provisioner [4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468] ...
	I1007 12:48:46.692624 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468"
	I1007 12:48:46.731099 1605045 logs.go:123] Gathering logs for containerd ...
	I1007 12:48:46.731168 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 12:48:46.790385 1605045 logs.go:123] Gathering logs for container status ...
	I1007 12:48:46.790420 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:48:46.848918 1605045 logs.go:123] Gathering logs for kube-controller-manager [1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca] ...
	I1007 12:48:46.848948 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca"
	I1007 12:48:46.915621 1605045 logs.go:123] Gathering logs for kindnet [9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1] ...
	I1007 12:48:46.915656 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1"
	I1007 12:48:46.961161 1605045 logs.go:123] Gathering logs for storage-provisioner [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444] ...
	I1007 12:48:46.961190 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444"
	I1007 12:48:46.998949 1605045 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:48:46.999040 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:48:47.158864 1605045 logs.go:123] Gathering logs for etcd [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc] ...
	I1007 12:48:47.158897 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc"
	I1007 12:48:47.207012 1605045 logs.go:123] Gathering logs for coredns [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55] ...
	I1007 12:48:47.207048 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55"
	I1007 12:48:47.256247 1605045 logs.go:123] Gathering logs for kube-scheduler [56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990] ...
	I1007 12:48:47.256273 1605045 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990"
	I1007 12:48:47.299388 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:47.299414 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:48:47.299539 1605045 out.go:270] X Problems detected in kubelet:
	W1007 12:48:47.299555 1605045 out.go:270]   Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:47.299575 1605045 out.go:270]   Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:47.299582 1605045 out.go:270]   Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 12:48:47.299593 1605045 out.go:270]   Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	W1007 12:48:47.299601 1605045 out.go:270]   Oct 07 12:48:36 old-k8s-version-130031 kubelet[658]: E1007 12:48:36.373890     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1007 12:48:47.299607 1605045 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:47.299620 1605045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:48:48.278358 1613577 pod_ready.go:103] pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:48:50.278207 1613577 pod_ready.go:82] duration metric: took 4m0.00565489s for pod "metrics-server-6867b74b74-q6rhq" in "kube-system" namespace to be "Ready" ...
	E1007 12:48:50.278234 1613577 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 12:48:50.278244 1613577 pod_ready.go:39] duration metric: took 4m0.609930509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
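	(Interleaved with the old-k8s-version trace, process 1613577 belongs to a second profile whose pod_ready.go loop has been polling the metrics-server pod's Ready condition roughly every 2s; it gives up after a 4m deadline, producing the `context deadline exceeded` at pod_ready.go:67 above. A rough equivalent of that wait loop, polling through kubectl rather than minikube's client-go helpers — the kube-context name below is illustrative, the pod name is taken from the log:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition until it is "True" or the
    // context deadline expires, mirroring the pod_ready.go loop in the log.
    func waitPodReady(ctx context.Context, kubeContext, ns, pod string) error {
    	tick := time.NewTicker(2 * time.Second)
    	defer tick.Stop()
    	for {
    		out, _ := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
    			"-n", ns, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		status := strings.TrimSpace(string(out))
    		if status == "True" {
    			return nil
    		}
    		fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", pod, ns, status)
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // "context deadline exceeded" after 4m
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	// The context name "minikube" is hypothetical; pod name is from the log.
    	err := waitPodReady(ctx, "minikube", "kube-system", "metrics-server-6867b74b74-q6rhq")
    	fmt.Println("wait result:", err)
    }

	End of annotation; the trace continues below.)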
	I1007 12:48:50.278258 1613577 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:48:50.278295 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 12:48:50.278360 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 12:48:50.317478 1613577 cri.go:89] found id: "019955be7bca2725110821d1ff443b485cde2e427206a18dba070d9baebd6542"
	I1007 12:48:50.317503 1613577 cri.go:89] found id: "c37386ed89b9d61b1b82675adfdf8bccd71bb807911cfdeb4dd070e5bec7775e"
	I1007 12:48:50.317509 1613577 cri.go:89] found id: ""
	I1007 12:48:50.317516 1613577 logs.go:282] 2 containers: [019955be7bca2725110821d1ff443b485cde2e427206a18dba070d9baebd6542 c37386ed89b9d61b1b82675adfdf8bccd71bb807911cfdeb4dd070e5bec7775e]
	I1007 12:48:50.317573 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.321559 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.325257 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 12:48:50.325344 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 12:48:50.365977 1613577 cri.go:89] found id: "51f9b46232cedb0487d943ea636a94f2ec2ac6a7a27866732fe0f4db72017094"
	I1007 12:48:50.365997 1613577 cri.go:89] found id: "18005b336828432fa03ab428eda3e60c9f36d75d9538ca9f8d27e70dc1fe8a8d"
	I1007 12:48:50.366002 1613577 cri.go:89] found id: ""
	I1007 12:48:50.366017 1613577 logs.go:282] 2 containers: [51f9b46232cedb0487d943ea636a94f2ec2ac6a7a27866732fe0f4db72017094 18005b336828432fa03ab428eda3e60c9f36d75d9538ca9f8d27e70dc1fe8a8d]
	I1007 12:48:50.366073 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.369624 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.373112 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 12:48:50.373192 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 12:48:50.415176 1613577 cri.go:89] found id: "4a09e2bed0790dff0f6bf79237749e0327e1fc74df3f3123d1b4aa8071fed1ff"
	I1007 12:48:50.415200 1613577 cri.go:89] found id: "814f75c77ac0040ccc4d3c81eb9529e4e5408b249a4158974ebf04ffa190b820"
	I1007 12:48:50.415205 1613577 cri.go:89] found id: ""
	I1007 12:48:50.415218 1613577 logs.go:282] 2 containers: [4a09e2bed0790dff0f6bf79237749e0327e1fc74df3f3123d1b4aa8071fed1ff 814f75c77ac0040ccc4d3c81eb9529e4e5408b249a4158974ebf04ffa190b820]
	I1007 12:48:50.415274 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.419018 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.423204 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 12:48:50.423300 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 12:48:50.464208 1613577 cri.go:89] found id: "fee94a25d2a92571d47d33f002cdf06181a8be9c03eff2225e8a41a3dac8b0b3"
	I1007 12:48:50.464232 1613577 cri.go:89] found id: "4403ba1f0a9b607da97bd56246a70cfa89acacd6d17f0611bbd2692cb85de1a1"
	I1007 12:48:50.464237 1613577 cri.go:89] found id: ""
	I1007 12:48:50.464245 1613577 logs.go:282] 2 containers: [fee94a25d2a92571d47d33f002cdf06181a8be9c03eff2225e8a41a3dac8b0b3 4403ba1f0a9b607da97bd56246a70cfa89acacd6d17f0611bbd2692cb85de1a1]
	I1007 12:48:50.464304 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.468103 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.471518 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 12:48:50.471630 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 12:48:50.514410 1613577 cri.go:89] found id: "23d8ce614c2e42a0721370037a02cedb1486987bf2bfb480c1c5fcfbdd49e4a4"
	I1007 12:48:50.514438 1613577 cri.go:89] found id: "17d55b0482e3212552be8bfecc548d5f66bf02d99a99d14de6f259c635f0766f"
	I1007 12:48:50.514444 1613577 cri.go:89] found id: ""
	I1007 12:48:50.514452 1613577 logs.go:282] 2 containers: [23d8ce614c2e42a0721370037a02cedb1486987bf2bfb480c1c5fcfbdd49e4a4 17d55b0482e3212552be8bfecc548d5f66bf02d99a99d14de6f259c635f0766f]
	I1007 12:48:50.514511 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.518354 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.521843 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 12:48:50.521938 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 12:48:50.564423 1613577 cri.go:89] found id: "26a0b9473e0f10f19feb90eb43a620d3adabaa276934cf281915d541a2fc8c5e"
	I1007 12:48:50.564490 1613577 cri.go:89] found id: "b04eb391a33be20bec0f778fb1c3b542e5bd3d1353d6c6e1e9f4f0e2a033eb98"
	I1007 12:48:50.564510 1613577 cri.go:89] found id: ""
	I1007 12:48:50.564523 1613577 logs.go:282] 2 containers: [26a0b9473e0f10f19feb90eb43a620d3adabaa276934cf281915d541a2fc8c5e b04eb391a33be20bec0f778fb1c3b542e5bd3d1353d6c6e1e9f4f0e2a033eb98]
	I1007 12:48:50.564579 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.568464 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.572734 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 12:48:50.572823 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 12:48:50.610320 1613577 cri.go:89] found id: "5ff5620045628f05cf6192d676729ed43db5079bdc0286584937f595488fdb19"
	I1007 12:48:50.610402 1613577 cri.go:89] found id: "36f089345e3539eecfa5eb7210487b87e8338c72f7ff12da89e3ba1d68bba809"
	I1007 12:48:50.610422 1613577 cri.go:89] found id: ""
	I1007 12:48:50.610455 1613577 logs.go:282] 2 containers: [5ff5620045628f05cf6192d676729ed43db5079bdc0286584937f595488fdb19 36f089345e3539eecfa5eb7210487b87e8338c72f7ff12da89e3ba1d68bba809]
	I1007 12:48:50.610544 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.615249 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.619101 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 12:48:50.619224 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 12:48:50.656723 1613577 cri.go:89] found id: "a3289c8d8f5d5dabc9c53935a96138a4604a2bcef952812a49c8bae721a83ebd"
	I1007 12:48:50.656748 1613577 cri.go:89] found id: ""
	I1007 12:48:50.656756 1613577 logs.go:282] 1 containers: [a3289c8d8f5d5dabc9c53935a96138a4604a2bcef952812a49c8bae721a83ebd]
	I1007 12:48:50.656810 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.661129 1613577 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 12:48:50.661202 1613577 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 12:48:50.700038 1613577 cri.go:89] found id: "f91c47c47984ead8c8cbe35c97d9ad32c12f65fb3f4036e7eaa6fb0d25311d64"
	I1007 12:48:50.700061 1613577 cri.go:89] found id: "0b5db6e01688cd5728c721d6c07993b7f0121ac574662f53c371b9bb8ed8465a"
	I1007 12:48:50.700065 1613577 cri.go:89] found id: ""
	I1007 12:48:50.700073 1613577 logs.go:282] 2 containers: [f91c47c47984ead8c8cbe35c97d9ad32c12f65fb3f4036e7eaa6fb0d25311d64 0b5db6e01688cd5728c721d6c07993b7f0121ac574662f53c371b9bb8ed8465a]
	I1007 12:48:50.700127 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.710574 1613577 ssh_runner.go:195] Run: which crictl
	I1007 12:48:50.715266 1613577 logs.go:123] Gathering logs for kube-proxy [23d8ce614c2e42a0721370037a02cedb1486987bf2bfb480c1c5fcfbdd49e4a4] ...
	I1007 12:48:50.715300 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23d8ce614c2e42a0721370037a02cedb1486987bf2bfb480c1c5fcfbdd49e4a4"
	I1007 12:48:50.762396 1613577 logs.go:123] Gathering logs for kubernetes-dashboard [a3289c8d8f5d5dabc9c53935a96138a4604a2bcef952812a49c8bae721a83ebd] ...
	I1007 12:48:50.762424 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3289c8d8f5d5dabc9c53935a96138a4604a2bcef952812a49c8bae721a83ebd"
	I1007 12:48:50.806521 1613577 logs.go:123] Gathering logs for container status ...
	I1007 12:48:50.806548 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 12:48:50.850843 1613577 logs.go:123] Gathering logs for etcd [51f9b46232cedb0487d943ea636a94f2ec2ac6a7a27866732fe0f4db72017094] ...
	I1007 12:48:50.850908 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51f9b46232cedb0487d943ea636a94f2ec2ac6a7a27866732fe0f4db72017094"
	I1007 12:48:50.899382 1613577 logs.go:123] Gathering logs for kube-scheduler [4403ba1f0a9b607da97bd56246a70cfa89acacd6d17f0611bbd2692cb85de1a1] ...
	I1007 12:48:50.899411 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4403ba1f0a9b607da97bd56246a70cfa89acacd6d17f0611bbd2692cb85de1a1"
	I1007 12:48:50.950150 1613577 logs.go:123] Gathering logs for kube-controller-manager [b04eb391a33be20bec0f778fb1c3b542e5bd3d1353d6c6e1e9f4f0e2a033eb98] ...
	I1007 12:48:50.950186 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b04eb391a33be20bec0f778fb1c3b542e5bd3d1353d6c6e1e9f4f0e2a033eb98"
	I1007 12:48:51.030266 1613577 logs.go:123] Gathering logs for storage-provisioner [0b5db6e01688cd5728c721d6c07993b7f0121ac574662f53c371b9bb8ed8465a] ...
	I1007 12:48:51.030301 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5db6e01688cd5728c721d6c07993b7f0121ac574662f53c371b9bb8ed8465a"
	I1007 12:48:51.077465 1613577 logs.go:123] Gathering logs for coredns [814f75c77ac0040ccc4d3c81eb9529e4e5408b249a4158974ebf04ffa190b820] ...
	I1007 12:48:51.077496 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 814f75c77ac0040ccc4d3c81eb9529e4e5408b249a4158974ebf04ffa190b820"
	I1007 12:48:51.132845 1613577 logs.go:123] Gathering logs for kube-proxy [17d55b0482e3212552be8bfecc548d5f66bf02d99a99d14de6f259c635f0766f] ...
	I1007 12:48:51.132876 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17d55b0482e3212552be8bfecc548d5f66bf02d99a99d14de6f259c635f0766f"
	I1007 12:48:51.176844 1613577 logs.go:123] Gathering logs for kube-controller-manager [26a0b9473e0f10f19feb90eb43a620d3adabaa276934cf281915d541a2fc8c5e] ...
	I1007 12:48:51.176876 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26a0b9473e0f10f19feb90eb43a620d3adabaa276934cf281915d541a2fc8c5e"
	I1007 12:48:51.268541 1613577 logs.go:123] Gathering logs for kindnet [5ff5620045628f05cf6192d676729ed43db5079bdc0286584937f595488fdb19] ...
	I1007 12:48:51.268623 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff5620045628f05cf6192d676729ed43db5079bdc0286584937f595488fdb19"
	I1007 12:48:51.309447 1613577 logs.go:123] Gathering logs for dmesg ...
	I1007 12:48:51.309479 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 12:48:51.326593 1613577 logs.go:123] Gathering logs for describe nodes ...
	I1007 12:48:51.326625 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 12:48:51.487442 1613577 logs.go:123] Gathering logs for etcd [18005b336828432fa03ab428eda3e60c9f36d75d9538ca9f8d27e70dc1fe8a8d] ...
	I1007 12:48:51.487472 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18005b336828432fa03ab428eda3e60c9f36d75d9538ca9f8d27e70dc1fe8a8d"
	I1007 12:48:51.530510 1613577 logs.go:123] Gathering logs for coredns [4a09e2bed0790dff0f6bf79237749e0327e1fc74df3f3123d1b4aa8071fed1ff] ...
	I1007 12:48:51.530542 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a09e2bed0790dff0f6bf79237749e0327e1fc74df3f3123d1b4aa8071fed1ff"
	I1007 12:48:51.576155 1613577 logs.go:123] Gathering logs for kindnet [36f089345e3539eecfa5eb7210487b87e8338c72f7ff12da89e3ba1d68bba809] ...
	I1007 12:48:51.576195 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36f089345e3539eecfa5eb7210487b87e8338c72f7ff12da89e3ba1d68bba809"
	I1007 12:48:51.618401 1613577 logs.go:123] Gathering logs for containerd ...
	I1007 12:48:51.618428 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 12:48:51.679267 1613577 logs.go:123] Gathering logs for storage-provisioner [f91c47c47984ead8c8cbe35c97d9ad32c12f65fb3f4036e7eaa6fb0d25311d64] ...
	I1007 12:48:51.679309 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f91c47c47984ead8c8cbe35c97d9ad32c12f65fb3f4036e7eaa6fb0d25311d64"
	I1007 12:48:51.720879 1613577 logs.go:123] Gathering logs for kubelet ...
	I1007 12:48:51.720906 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 12:48:51.764295 1613577 logs.go:138] Found kubelet problem: Oct 07 12:44:55 no-preload-842812 kubelet[662]: W1007 12:44:55.360324     662 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-842812" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-842812' and this object
	W1007 12:48:51.764557 1613577 logs.go:138] Found kubelet problem: Oct 07 12:44:55 no-preload-842812 kubelet[662]: E1007 12:44:55.360380     662 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-842812\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-842812' and this object" logger="UnhandledError"
	I1007 12:48:51.795043 1613577 logs.go:123] Gathering logs for kube-apiserver [019955be7bca2725110821d1ff443b485cde2e427206a18dba070d9baebd6542] ...
	I1007 12:48:51.795073 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 019955be7bca2725110821d1ff443b485cde2e427206a18dba070d9baebd6542"
	I1007 12:48:51.855493 1613577 logs.go:123] Gathering logs for kube-apiserver [c37386ed89b9d61b1b82675adfdf8bccd71bb807911cfdeb4dd070e5bec7775e] ...
	I1007 12:48:51.855712 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c37386ed89b9d61b1b82675adfdf8bccd71bb807911cfdeb4dd070e5bec7775e"
	I1007 12:48:51.907716 1613577 logs.go:123] Gathering logs for kube-scheduler [fee94a25d2a92571d47d33f002cdf06181a8be9c03eff2225e8a41a3dac8b0b3] ...
	I1007 12:48:51.907750 1613577 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fee94a25d2a92571d47d33f002cdf06181a8be9c03eff2225e8a41a3dac8b0b3"
	I1007 12:48:51.948775 1613577 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:51.948804 1613577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 12:48:51.948857 1613577 out.go:270] X Problems detected in kubelet:
	W1007 12:48:51.948875 1613577 out.go:270]   Oct 07 12:44:55 no-preload-842812 kubelet[662]: W1007 12:44:55.360324     662 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-842812" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-842812' and this object
	W1007 12:48:51.948882 1613577 out.go:270]   Oct 07 12:44:55 no-preload-842812 kubelet[662]: E1007 12:44:55.360380     662 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-842812\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-842812' and this object" logger="UnhandledError"
	I1007 12:48:51.948895 1613577 out.go:358] Setting ErrFile to fd 2...
	I1007 12:48:51.948901 1613577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:48:57.301471 1605045 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1007 12:48:57.310517 1605045 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1007 12:48:57.313478 1605045 out.go:201] 
	W1007 12:48:57.316110 1605045 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1007 12:48:57.316147 1605045 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1007 12:48:57.316167 1605045 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1007 12:48:57.316173 1605045 out.go:270] * 
	W1007 12:48:57.317058 1605045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:48:57.320594 1605045 out.go:201] 
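
The diagnostics above are collected in a fixed pattern: for each component, minikube resolves container IDs with "crictl ps -a --quiet --name=<component>" and then tails each container's log with "crictl logs --tail 400". A minimal sketch of that loop, using only the two commands that appear verbatim in the log (run on the node, e.g. via minikube ssh; the component list is a subset of what the report queries):

    # Reproduce the log-gathering loop recorded by logs.go above.
    for name in etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "=== ${name} ${id} ==="
        sudo /usr/bin/crictl logs --tail 400 "$id"
      done
    done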
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b095649356193       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   9a8096691cd7b       dashboard-metrics-scraper-8d5bb5db8-b54bs
	a0fd0c4da3d88       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   3b1309d9a25af       storage-provisioner
	ee2b00422af4d       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   50e04847ac8bd       kubernetes-dashboard-cd95d586-bjf22
	140d4956bc9d5       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   f1b8b889db0b1       busybox
	4e279b29e8782       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   3b1309d9a25af       storage-provisioner
	4bb2fa0793733       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   efafd8b502536       coredns-74ff55c5b-466qx
	e0b54b6d912b9       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   4a67f88e579ec       kube-proxy-zkws6
	8cf6219e25137       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   6def523ec3f82       kindnet-d55m5
	07a060d66ad9e       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   26f210266d35a       kube-controller-manager-old-k8s-version-130031
	c560d5c1bf2a3       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   c4632dcb6d5f4       kube-apiserver-old-k8s-version-130031
	8f43fed348ba1       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   4f2507d96c682       kube-scheduler-old-k8s-version-130031
	1e0947c0d1d23       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   78b3b692a0520       etcd-old-k8s-version-130031
	25be02a521e3d       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   9f0a4a4a7b2f8       busybox
	8805bd4c8f5e0       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   1f15fb177d810       coredns-74ff55c5b-466qx
	9e5fd41f9e9f5       6a23fa8fd2b78       7 minutes ago       Exited              kindnet-cni                 0                   8c0c0b72a2cb5       kindnet-d55m5
	1d7b57cafcedf       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   2aeb555fb126b       kube-proxy-zkws6
	087491a883dc7       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   4e64cc747baf0       kube-apiserver-old-k8s-version-130031
	1cda9d232b1a2       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   7f60dfdd8bda5       etcd-old-k8s-version-130031
	56445c334390a       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   598fceca005fe       kube-scheduler-old-k8s-version-130031
	1242dd7b1b01b       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   39e33438070f2       kube-controller-manager-old-k8s-version-130031
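
The status table is the CRI-level view of the node: each control-plane component appears twice (attempt 1 Running after the restart, attempt 0 Exited from the first boot), and dashboard-metrics-scraper is the only entry still crash-looping (attempt 5, Exited). A sketch for listing just the exited set; --state is a standard crictl filter, though the report itself only runs the bare "crictl ps -a" form:

    # Show only exited containers; the lowercase state value is an
    # assumption about crictl's accepted spelling.
    sudo crictl ps -a --state exited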
	
	
	==> containerd <==
	Oct 07 12:44:46 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:44:46.379771862Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Oct 07 12:44:46 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:44:46.381813287Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Oct 07 12:44:46 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:44:46.382067336Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.375298944Z" level=info msg="CreateContainer within sandbox \"9a8096691cd7b6de2513f7dacac49d5673c9b6e568178bb479dc553208d09c2e\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.389701151Z" level=info msg="CreateContainer within sandbox \"9a8096691cd7b6de2513f7dacac49d5673c9b6e568178bb479dc553208d09c2e\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442\""
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.390470920Z" level=info msg="StartContainer for \"aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442\""
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.459785899Z" level=info msg="StartContainer for \"aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442\" returns successfully"
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.500231490Z" level=info msg="shim disconnected" id=aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442 namespace=k8s.io
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.500294159Z" level=warning msg="cleaning up after shim disconnected" id=aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442 namespace=k8s.io
	Oct 07 12:45:21 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:21.500306926Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 07 12:45:22 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:22.004914071Z" level=info msg="RemoveContainer for \"b6ac42ad113272cc3d56260a3c88d2bb5fd2d7f4dd9c2eca3a4fde6fd591554b\""
	Oct 07 12:45:22 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:45:22.011241203Z" level=info msg="RemoveContainer for \"b6ac42ad113272cc3d56260a3c88d2bb5fd2d7f4dd9c2eca3a4fde6fd591554b\" returns successfully"
	Oct 07 12:46:14 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:14.374950453Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:46:14 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:14.380700296Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Oct 07 12:46:14 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:14.382640110Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Oct 07 12:46:14 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:14.382720658Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.375370542Z" level=info msg="CreateContainer within sandbox \"9a8096691cd7b6de2513f7dacac49d5673c9b6e568178bb479dc553208d09c2e\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.392701347Z" level=info msg="CreateContainer within sandbox \"9a8096691cd7b6de2513f7dacac49d5673c9b6e568178bb479dc553208d09c2e\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe\""
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.393280679Z" level=info msg="StartContainer for \"b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe\""
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.470126183Z" level=info msg="StartContainer for \"b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe\" returns successfully"
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.496422083Z" level=info msg="shim disconnected" id=b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe namespace=k8s.io
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.496489101Z" level=warning msg="cleaning up after shim disconnected" id=b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe namespace=k8s.io
	Oct 07 12:46:43 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:43.496500670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 07 12:46:44 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:44.241209526Z" level=info msg="RemoveContainer for \"aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442\""
	Oct 07 12:46:44 old-k8s-version-130031 containerd[569]: time="2024-10-07T12:46:44.247982677Z" level=info msg="RemoveContainer for \"aba7dc4bef5e3b9eaa6a2f46714b1e90fb9bc377e90d66584281905ecb833442\" returns successfully"
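
The containerd journal shows two repeating failures: every pull of fake.domain/registry.k8s.io/echoserver:1.4 dies in DNS ("no such host"), and dashboard-metrics-scraper keeps being recreated (attempt 4, then 5) and exits within a second of starting. To extract just the pull failures, building on the journalctl invocation the report already runs:

    # Same journalctl window as the report, filtered to pull errors;
    # the grep pattern is illustrative, not part of the report's tooling.
    sudo journalctl -u containerd -n 400 | grep -i 'PullImage.*failed'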
	
	
	==> coredns [4bb2fa0793733df0e143059afecc82f6c14eff08bfc7bfdd197dd1f5722e7d55] <==
	I1007 12:43:47.104356       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 12:43:17.103774051 +0000 UTC m=+0.105255738) (total time: 30.000464563s):
	Trace[2019727887]: [30.000464563s] [30.000464563s] END
	E1007 12:43:47.104403       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1007 12:43:47.104729       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 12:43:17.104349019 +0000 UTC m=+0.105830706) (total time: 30.00036201s):
	Trace[939984059]: [30.00036201s] [30.00036201s] END
	E1007 12:43:47.104745       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1007 12:43:47.105168       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 12:43:17.104593148 +0000 UTC m=+0.106074827) (total time: 30.00056111s):
	Trace[911902081]: [30.00056111s] [30.00056111s] END
	E1007 12:43:47.105190       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34759 - 28155 "HINFO IN 3139686897072010502.5621397433454161196. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021823365s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
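
The restarted coredns spent its first 30 seconds unable to list Services, Endpoints, and Namespaces through the kubernetes Service VIP (dial tcp 10.96.0.1:443: i/o timeout) while its ready plugin kept reporting "Still waiting on: kubernetes". The plugin/ready lines come from coredns's readiness endpoint; a probe sketch, assuming the ready plugin's default port 8181 (not shown in the log) and the profile-named kubectl context used elsewhere in this report:

    # Resolve the running coredns pod's IP, then hit the ready plugin;
    # port 8181 is coredns's default for "ready" and is assumed here.
    POD_IP=$(kubectl --context old-k8s-version-130031 -n kube-system \
      get pod coredns-74ff55c5b-466qx -o jsonpath='{.status.podIP}')
    curl -fsS "http://${POD_IP}:8181/ready"    # run from the minikube node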
	
	
	==> coredns [8805bd4c8f5e006eb0f0abb45a40c3d0696ac379978061faa21852c4370a056a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42931 - 28234 "HINFO IN 1495640464051275466.8377945616978303034. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013226023s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-130031
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-130031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=old-k8s-version-130031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_41_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-130031
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:48:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:44:07 +0000   Mon, 07 Oct 2024 12:40:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:44:07 +0000   Mon, 07 Oct 2024 12:40:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:44:07 +0000   Mon, 07 Oct 2024 12:40:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:44:07 +0000   Mon, 07 Oct 2024 12:41:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-130031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 31d83e0e9ea7487f9ab50188260a164b
	  System UUID:                eca8ec8a-c470-4342-8449-40466cd87015
	  Boot ID:                    aa802e8e-7a27-4e80-bbf6-ed0c45666ec2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-74ff55c5b-466qx                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m37s
	  kube-system                 etcd-old-k8s-version-130031                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m45s
	  kube-system                 kindnet-d55m5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m37s
	  kube-system                 kube-apiserver-old-k8s-version-130031             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-controller-manager-old-k8s-version-130031    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-proxy-zkws6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-old-k8s-version-130031             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 metrics-server-9975d5f86-vkrtw                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-b54bs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-bjf22               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m5s (x4 over 8m5s)    kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s (x5 over 8m5s)    kubelet     Node old-k8s-version-130031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s (x4 over 8m5s)    kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m46s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m45s                  kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m45s                  kubelet     Node old-k8s-version-130031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m45s                  kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m38s                  kubelet     Node old-k8s-version-130031 status is now: NodeReady
	  Normal  Starting                 7m36s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-130031 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-130031 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
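
The node description shows a Ready control plane with tight headroom: 950m of the 2 CPUs and 420Mi of memory requested across 12 pods, with metrics-server-9975d5f86-vkrtw carrying 100m/200Mi of that. The same dump can be re-created directly; the profile-named context is the same assumption as above:

    # Re-run the node dump and isolate the resource summary; the grep
    # window of 12 lines is chosen to cover the table above.
    kubectl --context old-k8s-version-130031 describe node old-k8s-version-130031 \
      | grep -A 12 'Allocated resources'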
	
	
	==> dmesg <==
	
	
	==> etcd [1cda9d232b1a29681f59212c1398e49986b28471693e5b8ec3e1096d4b08664b] <==
	2024-10-07 12:40:54.710084 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-10-07 12:40:54.710439 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-10-07 12:40:54.710642 I | embed: listening for peers on 192.168.85.2:2380
	raft2024/10/07 12:40:54 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/10/07 12:40:54 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/10/07 12:40:54 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/10/07 12:40:54 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/10/07 12:40:54 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-10-07 12:40:54.972213 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-07 12:40:54.973168 I | etcdserver: published {Name:old-k8s-version-130031 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-10-07 12:40:54.973408 I | embed: ready to serve client requests
	2024-10-07 12:40:54.974904 I | embed: serving client requests on 192.168.85.2:2379
	2024-10-07 12:40:54.987579 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-07 12:40:54.987851 I | embed: ready to serve client requests
	2024-10-07 12:40:54.988035 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-07 12:40:54.993936 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-07 12:41:19.981050 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:41:21.426202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:41:31.426168 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:41:41.426243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:41:51.426174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:42:01.426003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:42:11.425988 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:42:21.426243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:42:31.426058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [1e0947c0d1d23430fec3b7273c7a245208911a9fd18651ee2ea30452df93bfdc] <==
	2024-10-07 12:44:55.220727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:05.220875 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:15.220914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:25.220782 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:35.220697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:45.221364 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:45:55.220681 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:05.220620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:15.220706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:25.224304 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:35.220702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:45.220993 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:46:55.220859 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:05.220958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:15.220706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:25.220745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:35.220885 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:45.221243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:47:55.220821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:05.220582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:15.220807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:25.220576 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:35.223825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:45.221531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 12:48:55.220761 I | etcdserver/api/etcdhttp: /health OK (status code 200)
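
Both etcd generations report healthy: the first instance wins its single-member election at term 2 and serves /health until the stop, and the restarted member answers /health every ten seconds through 12:48:55. The first log also announces a metrics listener on http://127.0.0.1:2381; probing it from the node is a quick liveness check, on the assumption (which holds for etcd 3.4) that the metrics listener serves /health as well:

    # Probe etcd via the metrics listener announced in the log; -f makes
    # curl exit non-zero on a non-2xx status.
    curl -fsS http://127.0.0.1:2381/health
    # expected on a healthy member: {"health":"true"}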
	
	
	==> kernel <==
	 12:48:59 up 1 day,  2:31,  0 users,  load average: 0.76, 2.09, 2.55
	Linux old-k8s-version-130031 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [8cf6219e2513777cac33b5c910a4793a59ef8cc60b43f7ab0383fcb48b0249b4] <==
	I1007 12:46:56.698930       1 main.go:299] handling current node
	I1007 12:47:06.706024       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:06.706059       1 main.go:299] handling current node
	I1007 12:47:16.698939       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:16.698976       1 main.go:299] handling current node
	I1007 12:47:26.698574       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:26.698610       1 main.go:299] handling current node
	I1007 12:47:36.704447       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:36.704483       1 main.go:299] handling current node
	I1007 12:47:46.703813       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:46.703910       1 main.go:299] handling current node
	I1007 12:47:56.697993       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:47:56.698027       1 main.go:299] handling current node
	I1007 12:48:06.704437       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:06.704470       1 main.go:299] handling current node
	I1007 12:48:16.698511       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:16.698761       1 main.go:299] handling current node
	I1007 12:48:26.704449       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:26.704485       1 main.go:299] handling current node
	I1007 12:48:36.704731       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:36.704949       1 main.go:299] handling current node
	I1007 12:48:46.705384       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:46.705416       1 main.go:299] handling current node
	I1007 12:48:56.703486       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:48:56.703524       1 main.go:299] handling current node
	
	
	==> kindnet [9e5fd41f9e9f5bb083bcc0a91b11894d2f6d83235dbe7b915428c7f7f5fe0fa1] <==
	I1007 12:41:24.097194       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1007 12:41:24.097758       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1007 12:41:24.098008       1 main.go:148] setting mtu 1500 for CNI 
	I1007 12:41:24.098030       1 main.go:178] kindnetd IP family: "ipv4"
	I1007 12:41:24.098106       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1007 12:41:24.496068       1 controller.go:334] Starting controller kube-network-policies
	I1007 12:41:24.496275       1 controller.go:338] Waiting for informer caches to sync
	I1007 12:41:24.496564       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1007 12:41:24.697274       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1007 12:41:24.697381       1 metrics.go:61] Registering metrics
	I1007 12:41:24.697521       1 controller.go:374] Syncing nftables rules
	I1007 12:41:34.505330       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:41:34.505392       1 main.go:299] handling current node
	I1007 12:41:44.495950       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:41:44.495985       1 main.go:299] handling current node
	I1007 12:41:54.503645       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:41:54.503702       1 main.go:299] handling current node
	I1007 12:42:04.503892       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:42:04.504111       1 main.go:299] handling current node
	I1007 12:42:14.496611       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:42:14.496808       1 main.go:299] handling current node
	I1007 12:42:24.495948       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I1007 12:42:24.495981       1 main.go:299] handling current node
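
Both kindnet generations do the same steady-state work: on this single-node cluster each ten-second sync pass handles only the local node (192.168.85.2). A quick check that the running instance is still cycling, reusing the crictl logs form from above (the 13-character ID prefix from the status table is enough for crictl):

    # Count node-sync passes in the running kindnet container's last 400
    # log lines; kindnet logs to stderr, hence the redirect.
    sudo /usr/bin/crictl logs --tail 400 8cf6219e25137 2>&1 \
      | grep -c 'handling current node'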
	
	
	==> kube-apiserver [087491a883dc7a00d19cb0b6f1aa66d75a5fdf4285af40a17d54c064db2cfb38] <==
	I1007 12:41:01.667691       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I1007 12:41:01.683700       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I1007 12:41:01.704841       1 controller.go:606] quota admission added evaluator for: namespaces
	I1007 12:41:02.332369       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1007 12:41:02.332401       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 12:41:02.338477       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1007 12:41:02.342341       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1007 12:41:02.342367       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1007 12:41:02.876145       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 12:41:02.918630       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1007 12:41:03.007184       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1007 12:41:03.008644       1 controller.go:606] quota admission added evaluator for: endpoints
	I1007 12:41:03.020230       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:41:04.082379       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1007 12:41:04.549283       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1007 12:41:04.612397       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1007 12:41:13.030380       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:41:20.926640       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1007 12:41:21.052137       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:41:40.522612       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:41:40.522658       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:41:40.522669       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 12:42:22.956558       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:42:22.956603       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:42:22.956613       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [c560d5c1bf2a37b5133996a47b9fd87c6142a14ff27e284676c132938457dd1a] <==
	I1007 12:45:02.938376       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:45:02.938386       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 12:45:40.146229       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:45:40.146282       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:45:40.146291       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1007 12:46:16.597748       1 handler_proxy.go:102] no RequestInfo found in the context
	E1007 12:46:16.597822       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1007 12:46:16.597836       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 12:46:17.840529       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:46:17.840586       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:46:17.840594       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 12:47:00.163821       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:47:00.163867       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:47:00.163877       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 12:47:42.724235       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:47:42.724281       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:47:42.724291       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1007 12:48:14.929629       1 handler_proxy.go:102] no RequestInfo found in the context
	E1007 12:48:14.929839       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1007 12:48:14.929857       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 12:48:25.940358       1 client.go:360] parsed scheme: "passthrough"
	I1007 12:48:25.940591       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 12:48:25.940611       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
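
The tail symptom in both apiserver generations is the aggregated metrics API: v1beta1.metrics.k8s.io answers 503 ("service unavailable"), so the OpenAPI aggregator keeps doing rate-limited requeues, and the controller-manager below sees the same group as undiscoverable. Confirming from outside takes two standard kubectl calls; the profile-named context is the only assumption, and the pod name is the one from the node description above:

    # An unavailable metrics-server shows Available=False on its APIService.
    kubectl --context old-k8s-version-130031 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-130031 -n kube-system get pod metrics-server-9975d5f86-vkrtw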
	
	
	==> kube-controller-manager [07a060d66ad9e5daa5e626e4bb7e26d30ff2ed7a40ea150c1cd31786b35e58df] <==
	W1007 12:44:37.329338       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:45:03.358146       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:45:08.979807       1 request.go:655] Throttling request took 1.048511788s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W1007 12:45:09.831301       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:45:33.860099       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:45:41.481872       1 request.go:655] Throttling request took 1.048336386s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W1007 12:45:42.333330       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:46:04.361933       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:46:13.983672       1 request.go:655] Throttling request took 1.048480574s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1007 12:46:14.835030       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:46:34.863789       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:46:46.485478       1 request.go:655] Throttling request took 1.048612979s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
	W1007 12:46:47.337168       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:47:05.366070       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:47:18.987728       1 request.go:655] Throttling request took 1.048454052s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1007 12:47:19.839270       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:47:35.873962       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:47:51.489735       1 request.go:655] Throttling request took 1.048526724s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W1007 12:47:52.341142       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:48:06.378073       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:48:23.991520       1 request.go:655] Throttling request took 1.048434403s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W1007 12:48:24.843202       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 12:48:36.880446       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 12:48:56.493746       1 request.go:655] Throttling request took 1.048377427s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W1007 12:48:57.348604       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [1242dd7b1b01beccce95d3398556baedca1c8a45e6c29f05bf39e03c50f5f0ca] <==
	I1007 12:41:20.957230       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1007 12:41:20.960859       1 event.go:291] "Event occurred" object="old-k8s-version-130031" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-130031 event: Registered Node old-k8s-version-130031 in Controller"
	I1007 12:41:20.962735       1 shared_informer.go:247] Caches are synced for expand 
	I1007 12:41:21.009749       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1007 12:41:21.037193       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-130031" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1007 12:41:21.037234       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-466qx"
	I1007 12:41:21.049610       1 shared_informer.go:247] Caches are synced for namespace 
	I1007 12:41:21.049907       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-llbvs"
	I1007 12:41:21.081520       1 shared_informer.go:247] Caches are synced for service account 
	I1007 12:41:21.081786       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I1007 12:41:21.126379       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zkws6"
	I1007 12:41:21.128436       1 shared_informer.go:247] Caches are synced for crt configmap 
	I1007 12:41:21.141685       1 shared_informer.go:247] Caches are synced for resource quota 
	I1007 12:41:21.155709       1 shared_informer.go:247] Caches are synced for job 
	I1007 12:41:21.174130       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d55m5"
	I1007 12:41:21.191034       1 shared_informer.go:247] Caches are synced for resource quota 
	E1007 12:41:21.266379       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"75a518b7-990d-45f2-800c-f9850031f084", ResourceVersion:"264", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863901664, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001691de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001691e00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001691e20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001a12080), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001691
e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001691e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001691ea0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400160b560), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d509e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b19420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000663a20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d50a58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1007 12:41:21.325872       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1007 12:41:21.615732       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1007 12:41:21.615756       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 12:41:21.630535       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1007 12:41:22.646861       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1007 12:41:22.690817       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-llbvs"
	I1007 12:41:25.956464       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1007 12:42:31.984330       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
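	# Annotation (not part of the captured log): the long daemon_controller.go error
	# above is Kubernetes optimistic concurrency at work; kubeadm and the controller
	# both wrote the kube-proxy DaemonSet, the second write lost on resourceVersion,
	# and the controller retried. A hedged spot-check that the object converged
	# anyway (context name taken from this report):
	kubectl --context old-k8s-version-130031 -n kube-system get ds kube-proxy \
	  -o jsonpath='{.metadata.generation}{" "}{.status.observedGeneration}{"\n"}'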
	
	
	==> kube-proxy [1d7b57cafcedf477410702249e618a5b428f449129dfe864c4e92565b31aa7da] <==
	I1007 12:41:21.997637       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1007 12:41:21.997727       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1007 12:41:22.033369       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1007 12:41:22.033469       1 server_others.go:185] Using iptables Proxier.
	I1007 12:41:22.033712       1 server.go:650] Version: v1.20.0
	I1007 12:41:22.034375       1 config.go:315] Starting service config controller
	I1007 12:41:22.034394       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1007 12:41:22.057290       1 config.go:224] Starting endpoint slice config controller
	I1007 12:41:22.057312       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1007 12:41:22.135630       1 shared_informer.go:247] Caches are synced for service config 
	I1007 12:41:22.164168       1 shared_informer.go:247] Caches are synced for endpoint slice config 
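	# Annotation: the 'Unknown proxy mode ""' warning means the mode field in
	# kube-proxy's config was left empty, so it fell back to iptables; nothing is
	# misconfigured. A minimal sketch to confirm (the ConfigMap name is the kubeadm
	# default and an assumption here):
	kubectl --context old-k8s-version-130031 -n kube-system \
	  get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep '^mode:'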
	
	
	==> kube-proxy [e0b54b6d912b9cac2cbd658159c366466e160b149d33915f3e96ca1d509f139a] <==
	I1007 12:43:17.262409       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1007 12:43:17.262490       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1007 12:43:17.368116       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1007 12:43:17.368353       1 server_others.go:185] Using iptables Proxier.
	I1007 12:43:17.368707       1 server.go:650] Version: v1.20.0
	I1007 12:43:17.370465       1 config.go:315] Starting service config controller
	I1007 12:43:17.370485       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1007 12:43:17.370506       1 config.go:224] Starting endpoint slice config controller
	I1007 12:43:17.370510       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1007 12:43:17.470662       1 shared_informer.go:247] Caches are synced for service config 
	I1007 12:43:17.470613       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [56445c334390ae25dcd546cbbe4c491fdc1651b3a5a4d8815c363c7a8bfbb990] <==
	W1007 12:41:01.549094       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:41:01.549124       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 12:41:01.549160       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 12:41:01.635889       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:41:01.635919       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:41:01.638675       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1007 12:41:01.639055       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1007 12:41:01.669350       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:41:01.669797       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 12:41:01.685479       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 12:41:01.685958       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:41:01.686339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 12:41:01.686587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:41:01.686994       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:41:01.687317       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:41:01.687381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:41:01.687458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 12:41:01.687834       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:41:01.692754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:41:02.545478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:41:02.587380       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:41:02.649758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:41:02.698780       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 12:41:02.840264       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1007 12:41:05.836140       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
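	# Annotation: the burst of "forbidden" reflector errors is the scheduler
	# starting before its RBAC bindings were visible to the apiserver; they stop
	# once caches sync at 12:41:05. A hedged way to re-check its permissions
	# afterwards, via impersonation:
	kubectl --context old-k8s-version-130031 auth can-i list pods \
	  --as=system:kube-scheduler --all-namespaces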
	
	
	==> kube-scheduler [8f43fed348ba1cdb7ae28e497ad28f67977db09949a112d2d17870df4a117f0a] <==
	I1007 12:43:07.594785       1 serving.go:331] Generated self-signed cert in-memory
	W1007 12:43:13.786694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 12:43:13.786735       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:43:13.786743       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 12:43:13.786751       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 12:43:13.968427       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1007 12:43:13.968768       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:43:13.968864       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:43:13.968963       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1007 12:43:14.071511       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
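	# Annotation: the requestheader_controller warning above suggests its own fix.
	# An illustrative variant bound to the scheduler's user identity instead of a
	# ServiceAccount (the rolebinding name is made up for the example):
	kubectl --context old-k8s-version-130031 -n kube-system create rolebinding \
	  scheduler-authn-reader --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler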
	
	
	==> kubelet <==
	Oct 07 12:47:04 old-k8s-version-130031 kubelet[658]: E1007 12:47:04.374129     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:47:14 old-k8s-version-130031 kubelet[658]: E1007 12:47:14.373917     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: I1007 12:47:17.372829     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:47:17 old-k8s-version-130031 kubelet[658]: E1007 12:47:17.373207     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:47:28 old-k8s-version-130031 kubelet[658]: E1007 12:47:28.373860     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: I1007 12:47:29.372913     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:47:29 old-k8s-version-130031 kubelet[658]: E1007 12:47:29.373269     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: I1007 12:47:41.372892     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373813     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:47:41 old-k8s-version-130031 kubelet[658]: E1007 12:47:41.373880     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: I1007 12:47:54.373570     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:47:54 old-k8s-version-130031 kubelet[658]: E1007 12:47:54.374386     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:47:56 old-k8s-version-130031 kubelet[658]: E1007 12:47:56.373681     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: I1007 12:48:06.373054     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:48:06 old-k8s-version-130031 kubelet[658]: E1007 12:48:06.373474     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:48:10 old-k8s-version-130031 kubelet[658]: E1007 12:48:10.374205     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: I1007 12:48:19.372848     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:48:19 old-k8s-version-130031 kubelet[658]: E1007 12:48:19.373191     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:48:22 old-k8s-version-130031 kubelet[658]: E1007 12:48:22.377528     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: I1007 12:48:33.372947     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:48:33 old-k8s-version-130031 kubelet[658]: E1007 12:48:33.373332     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:48:36 old-k8s-version-130031 kubelet[658]: E1007 12:48:36.373890     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 12:48:48 old-k8s-version-130031 kubelet[658]: I1007 12:48:48.373015     658 scope.go:95] [topologymanager] RemoveContainer - Container ID: b0956493561938e16571bbd1034d1065cc42ec56d96a0118b7551c6317da12fe
	Oct 07 12:48:48 old-k8s-version-130031 kubelet[658]: E1007 12:48:48.373383     658 pod_workers.go:191] Error syncing pod 65909a33-e70d-4615-8ba7-4d6c9d9f6dd7 ("dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-b54bs_kubernetes-dashboard(65909a33-e70d-4615-8ba7-4d6c9d9f6dd7)"
	Oct 07 12:48:50 old-k8s-version-130031 kubelet[658]: E1007 12:48:50.374735     658 pod_workers.go:191] Error syncing pod a3e21a3e-2a59-405e-9af7-0f2a4a27ed62 ("metrics-server-9975d5f86-vkrtw_kube-system(a3e21a3e-2a59-405e-9af7-0f2a4a27ed62)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
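	# Annotation: the two errors that repeat above are steady-state for this run:
	# metrics-server cannot pull its image because the suite points it at the
	# unreachable fake.domain registry, and dashboard-metrics-scraper is in
	# CrashLoopBackOff, so the kubelet keeps backing off on both. A hedged way to
	# pull the event trail (the label selector is an assumption about the addon
	# manifest):
	kubectl --context old-k8s-version-130031 -n kube-system \
	  describe pod -l k8s-app=metrics-server | sed -n '/Events:/,$p'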
	
	
	==> kubernetes-dashboard [ee2b00422af4d052a720f617e5504040c0d7e9f2bff2f350b1c71f90cd16a1b0] <==
	2024/10/07 12:43:37 Using namespace: kubernetes-dashboard
	2024/10/07 12:43:37 Using in-cluster config to connect to apiserver
	2024/10/07 12:43:37 Using secret token for csrf signing
	2024/10/07 12:43:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/07 12:43:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/07 12:43:37 Successful initial request to the apiserver, version: v1.20.0
	2024/10/07 12:43:37 Generating JWE encryption key
	2024/10/07 12:43:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/07 12:43:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/07 12:43:37 Initializing JWE encryption key from synchronized object
	2024/10/07 12:43:37 Creating in-cluster Sidecar client
	2024/10/07 12:43:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:43:37 Serving insecurely on HTTP port: 9090
	2024/10/07 12:44:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:44:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:45:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:45:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:46:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:47:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:47:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:48:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:48:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 12:43:37 Starting overwatch
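	# Annotation: the dashboard's metric health checks fail for the same reason as
	# the metrics-server errors above, and the out-of-order 12:43:37 "Starting
	# overwatch" line is most plausibly interleaved logging, not a restart. A quick
	# check that the scraper service at least exists:
	kubectl --context old-k8s-version-130031 -n kubernetes-dashboard \
	  get svc dashboard-metrics-scraper -o wide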
	
	
	==> storage-provisioner [4e279b29e878213ed1a543c131a7d9f90f5b59daeb83257e0bf1c6513ff53468] <==
	I1007 12:43:17.359303       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1007 12:43:47.361895       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
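	# Annotation: the fatal i/o timeout to 10.96.0.1:443 says the in-cluster
	# apiserver VIP was not reachable yet during the restart, so this instance
	# exited and the one below replaced it. A hedged check that the VIP has live
	# endpoints:
	kubectl --context old-k8s-version-130031 get endpoints kubernetes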
	
	
	==> storage-provisioner [a0fd0c4da3d885bfd3e984d4118d66727c79c42fb58e4d7d63b72a2a3ad14444] <==
	I1007 12:43:58.516197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:43:58.551339       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:43:58.551424       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:44:16.031135       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:44:16.031592       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130031_5bf7acef-2084-4da1-88b2-fb96d19513e2!
	I1007 12:44:16.031409       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac1e8457-2a91-467f-9296-67bcc9156816", APIVersion:"v1", ResourceVersion:"820", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-130031_5bf7acef-2084-4da1-88b2-fb96d19513e2 became leader
	I1007 12:44:16.131901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-130031_5bf7acef-2084-4da1-88b2-fb96d19513e2!
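	# Annotation: the LeaderElection event above shows the provisioner still uses
	# the legacy Endpoints-based lock. One way to inspect the current holder:
	kubectl --context old-k8s-version-130031 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml | grep -i leader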
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130031 -n old-k8s-version-130031
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-130031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-vkrtw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-130031 describe pod metrics-server-9975d5f86-vkrtw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-130031 describe pod metrics-server-9975d5f86-vkrtw: exit status 1 (96.826902ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-vkrtw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-130031 describe pod metrics-server-9975d5f86-vkrtw: exit status 1
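The NotFound above most plausibly means the pod named by the earlier listing was deleted before the describe ran. A selector-based describe sidesteps that race (a sketch; the label is an assumption about the addon manifest):

	kubectl --context old-k8s-version-130031 -n kube-system describe pod -l k8s-app=metrics-server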
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.90s)

                                                
                                    

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.43
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.09
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 215.97
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/PullSecret 9.9
34 TestAddons/parallel/Registry 16.87
35 TestAddons/parallel/Ingress 18.65
36 TestAddons/parallel/InspektorGadget 12.08
37 TestAddons/parallel/MetricsServer 5.82
39 TestAddons/parallel/CSI 47.4
40 TestAddons/parallel/Headlamp 16.94
41 TestAddons/parallel/CloudSpanner 6.68
42 TestAddons/parallel/LocalPath 52.93
43 TestAddons/parallel/NvidiaDevicePlugin 6.09
44 TestAddons/parallel/Yakd 11.83
45 TestAddons/StoppedEnableDisable 12.28
46 TestCertOptions 36.41
47 TestCertExpiration 231.49
49 TestForceSystemdFlag 41.57
50 TestForceSystemdEnv 43.33
51 TestDockerEnvContainerd 44.66
56 TestErrorSpam/setup 29.17
57 TestErrorSpam/start 0.77
58 TestErrorSpam/status 1.04
59 TestErrorSpam/pause 1.73
60 TestErrorSpam/unpause 1.86
61 TestErrorSpam/stop 1.5
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.52
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.12
73 TestFunctional/serial/CacheCmd/cache/add_local 1.28
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.02
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 47.12
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.81
84 TestFunctional/serial/LogsFileCmd 1.83
85 TestFunctional/serial/InvalidService 4.64
87 TestFunctional/parallel/ConfigCmd 0.48
88 TestFunctional/parallel/DashboardCmd 10.62
89 TestFunctional/parallel/DryRun 0.55
90 TestFunctional/parallel/InternationalLanguage 0.26
91 TestFunctional/parallel/StatusCmd 1.2
95 TestFunctional/parallel/ServiceCmdConnect 10.61
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 25.8
99 TestFunctional/parallel/SSHCmd 0.75
100 TestFunctional/parallel/CpCmd 2
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 1.91
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
111 TestFunctional/parallel/License 0.32
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 1.34
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.36
119 TestFunctional/parallel/ImageCommands/Setup 0.72
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.54
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.46
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.42
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
140 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
141 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
142 TestFunctional/parallel/ServiceCmd/List 0.5
143 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
144 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
145 TestFunctional/parallel/ServiceCmd/Format 0.38
146 TestFunctional/parallel/ServiceCmd/URL 0.4
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
148 TestFunctional/parallel/ProfileCmd/profile_list 0.45
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.64
150 TestFunctional/parallel/MountCmd/any-port 8.7
151 TestFunctional/parallel/MountCmd/specific-port 2.05
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 116.16
160 TestMultiControlPlane/serial/DeployApp 32.21
161 TestMultiControlPlane/serial/PingHostFromPods 1.65
162 TestMultiControlPlane/serial/AddWorkerNode 22.85
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
165 TestMultiControlPlane/serial/CopyFile 19.76
166 TestMultiControlPlane/serial/StopSecondaryNode 12.87
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.15
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.05
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 144.7
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.03
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
173 TestMultiControlPlane/serial/StopCluster 36.16
174 TestMultiControlPlane/serial/RestartCluster 69.2
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
176 TestMultiControlPlane/serial/AddSecondaryNode 43.6
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
181 TestJSONOutput/start/Command 49.99
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.79
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.71
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.82
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 38.88
207 TestKicCustomNetwork/use_default_bridge_network 32.07
208 TestKicExistingNetwork 35.86
209 TestKicCustomSubnet 32.68
210 TestKicStaticIP 30.95
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 70.01
215 TestMountStart/serial/StartWithMountFirst 5.98
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.51
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.82
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 62.53
227 TestMultiNode/serial/DeployApp2Nodes 16.89
228 TestMultiNode/serial/PingHostFrom2Pods 1.02
229 TestMultiNode/serial/AddNode 17.47
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.69
232 TestMultiNode/serial/CopyFile 10.1
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.73
235 TestMultiNode/serial/RestartKeepsNodes 92.76
236 TestMultiNode/serial/DeleteNode 5.58
237 TestMultiNode/serial/StopMultiNode 24.11
238 TestMultiNode/serial/RestartMultiNode 55.6
239 TestMultiNode/serial/ValidateNameConflict 32.09
244 TestPreload 127.68
246 TestScheduledStopUnix 104.31
249 TestInsufficientStorage 10.44
250 TestRunningBinaryUpgrade 79.42
252 TestKubernetesUpgrade 348.83
253 TestMissingContainerUpgrade 183.68
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/StartWithK8s 40.08
257 TestNoKubernetes/serial/StartWithStopK8s 20.64
258 TestNoKubernetes/serial/Start 8.73
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
260 TestNoKubernetes/serial/ProfileList 1.22
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.86
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
264 TestStoppedBinaryUpgrade/Setup 0.84
265 TestStoppedBinaryUpgrade/Upgrade 78.04
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.19
275 TestPause/serial/Start 82.99
276 TestPause/serial/SecondStartNoReconfiguration 7.3
277 TestPause/serial/Pause 0.87
278 TestPause/serial/VerifyStatus 0.36
279 TestPause/serial/Unpause 1.2
280 TestPause/serial/PauseAgain 1.61
281 TestPause/serial/DeletePaused 2.99
282 TestPause/serial/VerifyDeletedResources 0.55
290 TestNetworkPlugins/group/false 4.8
295 TestStartStop/group/old-k8s-version/serial/FirstStart 118.9
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.68
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.27
298 TestStartStop/group/old-k8s-version/serial/Stop 12.17
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
302 TestStartStop/group/no-preload/serial/FirstStart 66.98
303 TestStartStop/group/no-preload/serial/DeployApp 9.37
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
305 TestStartStop/group/no-preload/serial/Stop 12.18
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 289.31
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/old-k8s-version/serial/Pause 3.25
313 TestStartStop/group/embed-certs/serial/FirstStart 51.13
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/no-preload/serial/Pause 4.08
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.57
320 TestStartStop/group/embed-certs/serial/DeployApp 9.38
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
322 TestStartStop/group/embed-certs/serial/Stop 12.25
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/embed-certs/serial/SecondStart 266.56
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.38
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.53
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.94
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
333 TestStartStop/group/embed-certs/serial/Pause 3.12
335 TestStartStop/group/newest-cni/serial/FirstStart 38.27
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.42
340 TestNetworkPlugins/group/auto/Start 58
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.43
343 TestStartStop/group/newest-cni/serial/Stop 1.45
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.44
345 TestStartStop/group/newest-cni/serial/SecondStart 23.61
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
349 TestStartStop/group/newest-cni/serial/Pause 3.85
350 TestNetworkPlugins/group/kindnet/Start 65.1
351 TestNetworkPlugins/group/auto/KubeletFlags 0.44
352 TestNetworkPlugins/group/auto/NetCatPod 11.37
353 TestNetworkPlugins/group/auto/DNS 0.25
354 TestNetworkPlugins/group/auto/Localhost 0.19
355 TestNetworkPlugins/group/auto/HairPin 0.25
356 TestNetworkPlugins/group/calico/Start 70.19
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
360 TestNetworkPlugins/group/kindnet/DNS 0.25
361 TestNetworkPlugins/group/kindnet/Localhost 0.19
362 TestNetworkPlugins/group/kindnet/HairPin 0.25
363 TestNetworkPlugins/group/custom-flannel/Start 57.02
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.36
366 TestNetworkPlugins/group/calico/NetCatPod 11.32
367 TestNetworkPlugins/group/calico/DNS 0.25
368 TestNetworkPlugins/group/calico/Localhost 0.22
369 TestNetworkPlugins/group/calico/HairPin 0.22
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
372 TestNetworkPlugins/group/enable-default-cni/Start 76.29
373 TestNetworkPlugins/group/custom-flannel/DNS 0.23
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
376 TestNetworkPlugins/group/flannel/Start 49.19
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
384 TestNetworkPlugins/group/flannel/NetCatPod 10.28
385 TestNetworkPlugins/group/flannel/DNS 0.27
386 TestNetworkPlugins/group/flannel/Localhost 0.19
387 TestNetworkPlugins/group/flannel/HairPin 0.24
388 TestNetworkPlugins/group/bridge/Start 48.34
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (6.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-431096 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-431096 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.202069649s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 11:53:10.216147 1400308 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1007 11:53:10.216231 1400308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
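The preload check above only stats the cached tarball. An equivalent manual check (path copied verbatim from the log):

	ls -lh /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4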

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-431096
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-431096: exit status 85 (75.666575ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-431096 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |          |
	|         | -p download-only-431096        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:53:04
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:53:04.065123 1400313 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:53:04.065301 1400313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:04.065313 1400313 out.go:358] Setting ErrFile to fd 2...
	I1007 11:53:04.065318 1400313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:04.065584 1400313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	W1007 11:53:04.065723 1400313 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19763-1394934/.minikube/config/config.json: open /home/jenkins/minikube-integration/19763-1394934/.minikube/config/config.json: no such file or directory
	I1007 11:53:04.066147 1400313 out.go:352] Setting JSON to true
	I1007 11:53:04.067063 1400313 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":92135,"bootTime":1728209849,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 11:53:04.067140 1400313 start.go:139] virtualization:  
	I1007 11:53:04.070807 1400313 out.go:97] [download-only-431096] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1007 11:53:04.071054 1400313 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 11:53:04.071156 1400313 notify.go:220] Checking for updates...
	I1007 11:53:04.073851 1400313 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:53:04.076852 1400313 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:53:04.079712 1400313 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 11:53:04.082571 1400313 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 11:53:04.085504 1400313 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 11:53:04.091126 1400313 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 11:53:04.091407 1400313 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:53:04.119739 1400313 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:53:04.119863 1400313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:04.176647 1400313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 11:53:04.1670572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:04.176762 1400313 docker.go:318] overlay module found
	I1007 11:53:04.179605 1400313 out.go:97] Using the docker driver based on user configuration
	I1007 11:53:04.179632 1400313 start.go:297] selected driver: docker
	I1007 11:53:04.179639 1400313 start.go:901] validating driver "docker" against <nil>
	I1007 11:53:04.179734 1400313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:04.233474 1400313 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 11:53:04.223595022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:04.233692 1400313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:53:04.233978 1400313 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 11:53:04.234136 1400313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:53:04.236976 1400313 out.go:169] Using Docker driver with root privileges
	I1007 11:53:04.239626 1400313 cni.go:84] Creating CNI manager for ""
	I1007 11:53:04.239687 1400313 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 11:53:04.239703 1400313 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:53:04.239807 1400313 start.go:340] cluster config:
	{Name:download-only-431096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-431096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:53:04.242518 1400313 out.go:97] Starting "download-only-431096" primary control-plane node in "download-only-431096" cluster
	I1007 11:53:04.242541 1400313 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 11:53:04.245283 1400313 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 11:53:04.245319 1400313 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 11:53:04.245430 1400313 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:53:04.261074 1400313 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:53:04.261249 1400313 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 11:53:04.261353 1400313 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:53:04.303131 1400313 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 11:53:04.303155 1400313 cache.go:56] Caching tarball of preloaded images
	I1007 11:53:04.303317 1400313 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 11:53:04.306403 1400313 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 11:53:04.306438 1400313 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:04.389489 1400313 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 11:53:08.527291 1400313 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:08.527453 1400313 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:08.871720 1400313 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	
	
	* The control-plane node download-only-431096 host does not exist
	  To start a cluster, run: "minikube start -p download-only-431096"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
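
Note: the preload fetch above (download.go:107) appends ?checksum=md5:… to the URL, and preload.go:247/254 then saves and verifies that checksum before the tarball is trusted. A minimal Go sketch of the verification step, for illustration only (the file name and expected hash are copied from the log; this is not minikube's actual code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected hex digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Values taken from the log; the real file lives under .minikube/cache/preloaded-tarball.
	err := verifyMD5(
		"preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4",
		"7e3d48ccb9f143791669d02e14ce1643",
	)
	fmt.Println(err)
}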

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-431096
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (5.43s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-149351 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-149351 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.430775837s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.43s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 11:53:16.078278 1400308 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1007 11:53:16.078321 1400308 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-149351
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-149351: exit status 85 (92.709432ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-431096 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | -p download-only-431096        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| delete  | -p download-only-431096        | download-only-431096 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC | 07 Oct 24 11:53 UTC |
	| start   | -o=json --download-only        | download-only-149351 | jenkins | v1.34.0 | 07 Oct 24 11:53 UTC |                     |
	|         | -p download-only-149351        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:53:10
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:53:10.696881 1400512 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:53:10.697075 1400512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:10.697105 1400512 out.go:358] Setting ErrFile to fd 2...
	I1007 11:53:10.697125 1400512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:53:10.697385 1400512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 11:53:10.697814 1400512 out.go:352] Setting JSON to true
	I1007 11:53:10.698736 1400512 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":92142,"bootTime":1728209849,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 11:53:10.698841 1400512 start.go:139] virtualization:  
	I1007 11:53:10.702180 1400512 out.go:97] [download-only-149351] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 11:53:10.702488 1400512 notify.go:220] Checking for updates...
	I1007 11:53:10.705611 1400512 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:53:10.708393 1400512 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:53:10.710963 1400512 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 11:53:10.713614 1400512 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 11:53:10.716285 1400512 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 11:53:10.721416 1400512 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 11:53:10.721694 1400512 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:53:10.750836 1400512 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 11:53:10.750954 1400512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:10.802015 1400512 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 11:53:10.792169603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:10.802124 1400512 docker.go:318] overlay module found
	I1007 11:53:10.804841 1400512 out.go:97] Using the docker driver based on user configuration
	I1007 11:53:10.804880 1400512 start.go:297] selected driver: docker
	I1007 11:53:10.804888 1400512 start.go:901] validating driver "docker" against <nil>
	I1007 11:53:10.804996 1400512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 11:53:10.853791 1400512 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 11:53:10.844524745 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 11:53:10.854010 1400512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:53:10.854356 1400512 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 11:53:10.854510 1400512 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:53:10.857312 1400512 out.go:169] Using Docker driver with root privileges
	I1007 11:53:10.859915 1400512 cni.go:84] Creating CNI manager for ""
	I1007 11:53:10.859974 1400512 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 11:53:10.859987 1400512 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 11:53:10.860082 1400512 start.go:340] cluster config:
	{Name:download-only-149351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-149351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:53:10.862811 1400512 out.go:97] Starting "download-only-149351" primary control-plane node in "download-only-149351" cluster
	I1007 11:53:10.862832 1400512 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 11:53:10.865414 1400512 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 11:53:10.865442 1400512 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:10.865610 1400512 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 11:53:10.881310 1400512 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 11:53:10.881445 1400512 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 11:53:10.881468 1400512 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 11:53:10.881473 1400512 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 11:53:10.881485 1400512 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 11:53:10.926075 1400512 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 11:53:10.926099 1400512 cache.go:56] Caching tarball of preloaded images
	I1007 11:53:10.926260 1400512 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:10.929161 1400512 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 11:53:10.929186 1400512 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:11.028407 1400512 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 11:53:14.579184 1400512 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:14.579310 1400512 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 11:53:15.440521 1400512 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1007 11:53:15.440922 1400512 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/download-only-149351/config.json ...
	I1007 11:53:15.440968 1400512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/download-only-149351/config.json: {Name:mkc6b84233edf5e77fad2935028ceef6d28e5512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:53:15.441726 1400512 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 11:53:15.441912 1400512 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19763-1394934/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-149351 host does not exist
	  To start a cluster, run: "minikube start -p download-only-149351"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.09s)
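
Note: unlike the v1.20.0 run, this run found the kicbase image already in the local cache (image.go:66/135) and skipped the pull. A hypothetical sketch of that check-cache-before-download pattern (ensureCached and the file name are illustrative, not from minikube):

package main

import (
	"fmt"
	"os"
)

// ensureCached runs fetch only when path is absent, mirroring the
// "exists in cache, skipping pull" branch seen in the log above.
func ensureCached(path string, fetch func(string) error) error {
	if _, err := os.Stat(path); err == nil {
		fmt.Printf("%s exists in cache, skipping pull\n", path)
		return nil
	} else if !os.IsNotExist(err) {
		return err // a real I/O error, not just a missing file
	}
	return fetch(path)
}

func main() {
	err := ensureCached("kicbase.tar", func(p string) error {
		fmt.Println("downloading", p) // stand-in for the real pull
		return nil
	})
	fmt.Println(err)
}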

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-149351
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1007 11:53:17.409514 1400308 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-087536 --alsologtostderr --binary-mirror http://127.0.0.1:36621 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-087536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-087536
--- PASS: TestBinaryMirror (0.60s)
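
Note: --binary-mirror points minikube at a local HTTP endpoint standing in for dl.k8s.io, while the ?checksum=file:… form in the log tells the downloader to fetch the expected hash from a sibling .sha256 URL. A hypothetical mirror is just a file server whose directory layout mimics the upstream paths (the ./mirror directory below is assumed, not from the test):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve e.g. ./mirror/release/v1.31.1/bin/linux/arm64/kubectl plus its
	// kubectl.sha256 next to it, mirroring the dl.k8s.io layout.
	handler := http.FileServer(http.Dir("./mirror"))
	log.Fatal(http.ListenAndServe("127.0.0.1:36621", handler))
}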

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-268164
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-268164: exit status 85 (88.533013ms)

-- stdout --
	* Profile "addons-268164" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268164"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
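
Note: the assertion here is on the exact exit code: enabling an addon on a profile that does not exist must fail with status 85, not merely return non-zero. A small Go sketch of extracting that code with os/exec (the helper is illustrative; the command line is copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs cmd and returns its exit status; -1 means it never started.
func exitCode(cmd *exec.Cmd) (int, error) {
	err := cmd.Run()
	if err == nil {
		return 0, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return -1, err
}

func main() {
	code, err := exitCode(exec.Command(
		"out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-268164",
	))
	fmt.Println(code, err) // the test would assert code == 85
}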

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-268164
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-268164: exit status 85 (83.302828ms)

-- stdout --
	* Profile "addons-268164" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-268164"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (215.97s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-268164 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-268164 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m35.965320902s)
--- PASS: TestAddons/Setup (215.97s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-268164 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-268164 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/serial/GCPAuth/PullSecret (9.9s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-268164 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-268164 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb779a31-1897-4bc4-809a-a56d703c6c92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb779a31-1897-4bc4-809a-a56d703c6c92] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 9.003548362s
addons_test.go:633: (dbg) Run:  kubectl --context addons-268164 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-268164 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-268164 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-268164 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (9.90s)
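
Note: the PullSecret checks work by exec-ing into the busybox pod and confirming that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS (pointing at the mounted /google-app-creds.json) and GOOGLE_CLOUD_PROJECT. A hypothetical sketch of one such probe (podExec is illustrative, not a test helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podExec runs a shell command inside the named pod via kubectl exec.
func podExec(ctx, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
		"--", "/bin/sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	env, err := podExec("addons-268164", "busybox", "printenv GOOGLE_APPLICATION_CREDENTIALS")
	fmt.Println(env, err) // a non-empty path indicates the webhook injected the env var
}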

TestAddons/parallel/Registry (16.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.974317ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-c2dt2" [acd6eaf5-969e-4672-988f-259e8dceaa8f] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004093995s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fs9v5" [7ec0748d-37a9-42c3-a336-49194895a61c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004172755s
addons_test.go:331: (dbg) Run:  kubectl --context addons-268164 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-268164 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-268164 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.83791909s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 ip
2024/10/07 12:01:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.87s)
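
Note: `wget --spider` requests the URL without downloading the body, so the registry check is effectively a headers-only probe against the Service DNS name, which resolves only inside the cluster (hence it runs in a throwaway busybox pod). A hypothetical Go equivalent using an HTTP HEAD request:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Only resolvable from inside the cluster; the test runs the real probe in a pod.
	resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode) // a 2xx status counts as healthy
}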

TestAddons/parallel/Ingress (18.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-268164 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-268164 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-268164 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [83215f24-f5a1-4138-b679-dda16c5f5ff3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [83215f24-f5a1-4138-b679-dda16c5f5ff3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004034814s
I1007 12:02:21.812523 1400308 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-268164 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable ingress-dns --alsologtostderr -v=1: (1.262697972s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable ingress --alsologtostderr -v=1: (7.786476236s)
--- PASS: TestAddons/parallel/Ingress (18.65s)
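
Note: the curl step exercises name-based routing: the request goes to 127.0.0.1, but the ingress controller picks the nginx backend from the Host header. A hypothetical Go version of the same request (in Go, setting req.Host controls the Host header that is sent):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Ingress routes on the Host header, not the URL we dialed.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}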

TestAddons/parallel/InspektorGadget (12.08s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-j46m9" [6ce70ff2-7280-41f0-9f2f-8e04f6760e80] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004728911s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable inspektor-gadget --alsologtostderr -v=1: (6.070116194s)
--- PASS: TestAddons/parallel/InspektorGadget (12.08s)

TestAddons/parallel/MetricsServer (5.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.340215ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lt7q4" [17f82f57-6954-417d-a932-533586c9d8e1] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004162781s
addons_test.go:402: (dbg) Run:  kubectl --context addons-268164 top pods -n kube-system
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

TestAddons/parallel/CSI (47.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1007 12:01:27.074287 1400308 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 12:01:27.080015 1400308 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 12:01:27.080048 1400308 kapi.go:107] duration metric: took 9.242293ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.255372ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-268164 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-268164 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [209a92dd-555c-4eb2-8c26-01ff046a9a55] Pending
helpers_test.go:344: "task-pv-pod" [209a92dd-555c-4eb2-8c26-01ff046a9a55] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [209a92dd-555c-4eb2-8c26-01ff046a9a55] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004621053s
addons_test.go:511: (dbg) Run:  kubectl --context addons-268164 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268164 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-268164 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-268164 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-268164 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-268164 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-268164 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3c847bd3-1184-4ba8-882b-b84caa6e8445] Pending
helpers_test.go:344: "task-pv-pod-restore" [3c847bd3-1184-4ba8-882b-b84caa6e8445] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3c847bd3-1184-4ba8-882b-b84caa6e8445] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003779523s
addons_test.go:553: (dbg) Run:  kubectl --context addons-268164 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-268164 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-268164 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable volumesnapshots --alsologtostderr -v=1: (1.075989046s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.094513581s)
--- PASS: TestAddons/parallel/CSI (47.40s)
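
Note: the long runs of `kubectl get pvc … -o jsonpath={.status.phase}` above are a poll loop: the helper re-queries the claim until it reports Bound. A hypothetical sketch of that loop (the 2s interval is assumed, not taken from the suite):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPVCBound polls the claim's phase until it is Bound or timeout elapses.
func waitPVCBound(ctx, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-n", "default", "-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
}

func main() {
	fmt.Println(waitPVCBound("addons-268164", "hpvc", 6*time.Minute))
}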

TestAddons/parallel/Headlamp (16.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-268164 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-268164 --alsologtostderr -v=1: (1.184472441s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-z9c5c" [3d7053f5-0380-42bf-8d81-b1bcff73d69e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-z9c5c" [3d7053f5-0380-42bf-8d81-b1bcff73d69e] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003380505s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable headlamp --alsologtostderr -v=1: (5.747706055s)
--- PASS: TestAddons/parallel/Headlamp (16.94s)

TestAddons/parallel/CloudSpanner (6.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-g8llr" [4f6e4b5e-b09c-415e-b880-450e68d9fb46] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00388305s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.68s)

TestAddons/parallel/LocalPath (52.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-268164 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-268164 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d90f0d12-5176-46aa-8ba1-f471a9f319ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d90f0d12-5176-46aa-8ba1-f471a9f319ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d90f0d12-5176-46aa-8ba1-f471a9f319ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004146224s
addons_test.go:901: (dbg) Run:  kubectl --context addons-268164 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 ssh "cat /opt/local-path-provisioner/pvc-a5cf27de-c269-4d69-aad2-a7326d701c82_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-268164 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-268164 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.465278656s)
--- PASS: TestAddons/parallel/LocalPath (52.93s)
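
For hand reproduction, the LocalPath flow above reduces to a short command sequence. A minimal sketch, assuming a scratch profile/context named demo (hypothetical); the PV directory name varies per run, so the <pv-name> placeholder must be read back from the bound claim (e.g. via kubectl get pvc test-pvc -o=json):

    # create the PVC and a pod that writes to it (manifests from minikube's testdata)
    kubectl --context demo apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context demo apply -f testdata/storage-provisioner-rancher/pod.yaml
    # poll until the claim reports Bound
    kubectl --context demo get pvc test-pvc -o jsonpath={.status.phase} -n default
    # local-path data lands under /opt/local-path-provisioner/<pv-name>_<namespace>_<claim> on the node
    minikube -p demo ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"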

TestAddons/parallel/NvidiaDevicePlugin (6.09s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c95fk" [6b3a7b2c-7ac7-4afb-96d5-5f58856d2ce2] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005452571s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-268164
addons_test.go:961: (dbg) Done: out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-268164: (1.081341499s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.09s)

TestAddons/parallel/Yakd (11.83s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-949jb" [619fa974-382c-46ab-8fee-92563e37fceb] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004917737s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-268164 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-268164 addons disable yakd --alsologtostderr -v=1: (5.820446016s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-268164
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-268164: (11.987184049s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-268164
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-268164
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-268164
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (36.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-034457 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-034457 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.715651736s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-034457 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-034457 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-034457 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-034457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-034457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-034457: (2.012619005s)
--- PASS: TestCertOptions (36.41s)
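
The cert-options run above can be reproduced by hand with the same flags. A minimal sketch, assuming a scratch profile named cert-demo (hypothetical):

    # start a cluster with extra apiserver SANs and a non-default apiserver port
    minikube start -p cert-demo --memory=2048 --apiserver-ips=192.168.15.15 \
      --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=containerd
    # dump the generated apiserver certificate and inspect its SAN list
    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"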

TestCertExpiration (231.49s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-914735 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-914735 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.372944468s)
E1007 12:39:57.178487 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-914735 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-914735 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.369755262s)
helpers_test.go:175: Cleaning up "cert-expiration-914735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-914735
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-914735: (2.743701993s)
--- PASS: TestCertExpiration (231.49s)
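
The expiration test starts a cluster whose certificates are valid for only 3m, lets them lapse, then restarts with a one-year lifetime to confirm regeneration. A minimal sketch, assuming a scratch profile named expiry-demo (hypothetical):

    minikube start -p expiry-demo --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    sleep 180   # wait out the 3-minute certificate lifetime
    # restarting with a longer expiration regenerates the expired certs
    minikube start -p expiry-demo --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd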

TestForceSystemdFlag (41.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-448988 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-448988 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.00481921s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-448988 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-448988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-448988
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-448988: (2.178887296s)
--- PASS: TestForceSystemdFlag (41.57s)
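
The flag test boots with --force-systemd and then reads the containerd config off the node. A minimal sketch, assuming a scratch profile named systemd-demo (hypothetical); grepping for SystemdCgroup is an assumption about what the assertion checks, not confirmed by the log:

    minikube start -p systemd-demo --memory=2048 --force-systemd \
      --driver=docker --container-runtime=containerd
    # inspect the runtime config the node actually ended up with
    minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep -i SystemdCgroup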

TestForceSystemdEnv (43.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-471819 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-471819 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.267575692s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-471819 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-471819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-471819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-471819: (2.596130923s)
--- PASS: TestForceSystemdEnv (43.33s)

TestDockerEnvContainerd (44.66s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-163252 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-163252 --driver=docker  --container-runtime=containerd: (29.157283159s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-163252"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QBkPsXo1OpxC/agent.1422633" SSH_AGENT_PID="1422634" DOCKER_HOST=ssh://docker@127.0.0.1:37901 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QBkPsXo1OpxC/agent.1422633" SSH_AGENT_PID="1422634" DOCKER_HOST=ssh://docker@127.0.0.1:37901 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QBkPsXo1OpxC/agent.1422633" SSH_AGENT_PID="1422634" DOCKER_HOST=ssh://docker@127.0.0.1:37901 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.119320848s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QBkPsXo1OpxC/agent.1422633" SSH_AGENT_PID="1422634" DOCKER_HOST=ssh://docker@127.0.0.1:37901 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-163252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-163252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-163252: (1.979541146s)
--- PASS: TestDockerEnvContainerd (44.66s)
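
The docker-env flow above points a host docker client at the dockerd inside the containerd-backed node over SSH. A minimal sketch, assuming a profile named dockerenv-demo and a build context at testdata/docker-env (both hypothetical):

    # export SSH-based connection settings for the profile into this shell
    eval "$(minikube -p dockerenv-demo docker-env --ssh-host --ssh-add)"
    docker version    # now answered by the engine inside the minikube node
    # BuildKit disabled, matching the test's invocation above
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls   # the freshly built image shows up node-side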

TestErrorSpam/setup (29.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-697834 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-697834 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-697834 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-697834 --driver=docker  --container-runtime=containerd: (29.173624923s)
--- PASS: TestErrorSpam/setup (29.17s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.73s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 pause
--- PASS: TestErrorSpam/pause (1.73s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 stop: (1.298139366s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-697834 --log_dir /tmp/nospam-697834 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19763-1394934/.minikube/files/etc/test/nested/copy/1400308/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-632459 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.831192725s)
--- PASS: TestFunctional/serial/StartWithProxy (51.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.52s)

=== RUN   TestFunctional/serial/SoftStart
I1007 12:05:07.675449 1400308 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-632459 --alsologtostderr -v=8: (6.517384312s)
functional_test.go:663: soft start took 6.519701382s for "functional-632459" cluster.
I1007 12:05:14.193387 1400308 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.52s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-632459 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:3.1: (1.523907134s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:3.3: (1.435099547s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 cache add registry.k8s.io/pause:latest: (1.164829104s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.12s)

TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-632459 /tmp/TestFunctionalserialCacheCmdcacheadd_local1851735784/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache add minikube-local-cache-test:functional-632459
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache delete minikube-local-cache-test:functional-632459
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-632459
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.28s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.636417ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 cache reload: (1.05674994s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.02s)
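
The cache_reload sequence above doubles as a recovery recipe: delete an image from the node, confirm it is gone, then push everything in minikube's host-side cache back. A minimal sketch, assuming a profile named demo (hypothetical):

    minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    # exits non-zero now: the image is no longer present on the node
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest
    minikube -p demo cache reload   # re-push all images from the host-side cache
    # succeeds again after the reload
    minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest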

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 kubectl -- --context functional-632459 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-632459 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (47.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-632459 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.117615586s)
functional_test.go:761: restart took 47.117709828s for "functional-632459" cluster.
I1007 12:06:09.784561 1400308 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (47.12s)
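
ExtraConfig demonstrates threading component flags through to kubeadm at start time; here it enables an apiserver admission plugin across a restart of an existing profile. A minimal sketch, assuming a profile named demo (hypothetical):

    # restart the existing cluster with an extra apiserver flag; --wait=all blocks
    # until every verified component reports healthy
    minikube start -p demo \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all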

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-632459 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 logs: (1.810271964s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)

TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 logs --file /tmp/TestFunctionalserialLogsFileCmd1848683186/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 logs --file /tmp/TestFunctionalserialLogsFileCmd1848683186/001/logs.txt: (1.823507649s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

TestFunctional/serial/InvalidService (4.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-632459 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-632459
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-632459: exit status 115 (592.40097ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32056 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-632459 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.64s)
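
InvalidService checks the failure path: a Service with no running backend pod should make `minikube service` exit with SVC_UNREACHABLE rather than print a dead URL. A minimal sketch, assuming a profile/context named demo (hypothetical):

    kubectl --context demo apply -f testdata/invalidsvc.yaml   # Service without a live backend
    # expected: exit status 115 and an SVC_UNREACHABLE explanation, as in the log above
    minikube -p demo service invalid-svc
    kubectl --context demo delete -f testdata/invalidsvc.yaml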

TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 config get cpus: exit status 14 (80.436722ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 config get cpus: exit status 14 (68.701135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
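
ConfigCmd pins down the config subcommand's contract: `config get` on an unset key exits 14 with "specified key could not be found in config" on stderr, and set/unset round-trip cleanly. A minimal sketch, assuming a profile named demo (hypothetical):

    minikube -p demo config get cpus     # unset key: exit status 14
    minikube -p demo config set cpus 2
    minikube -p demo config get cpus     # prints 2, exit 0
    minikube -p demo config unset cpus
    minikube -p demo config get cpus     # back to exit status 14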

TestFunctional/parallel/DashboardCmd (10.62s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-632459 --alsologtostderr -v=1]
E1007 12:06:55.401058 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:56.683326 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-632459 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1438004: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.62s)
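
DashboardCmd runs the dashboard in URL mode, which starts the proxy and prints a link instead of opening a browser; --port pins the local port. A minimal sketch, assuming a profile named demo (hypothetical); backgrounding and killing the proxy is one way to manage it, not the test's exact mechanism:

    # run in the background, note the printed URL, then stop the proxy
    minikube -p demo dashboard --url --port 36195 &
    DASH_PID=$!
    # ...fetch or open the URL it prints...
    kill "$DASH_PID"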

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-632459 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (235.187746ms)

-- stdout --
	* [functional-632459] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1007 12:06:54.687466 1437689 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:54.687637 1437689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:54.687651 1437689 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:54.687656 1437689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:54.687979 1437689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:06:54.688487 1437689 out.go:352] Setting JSON to false
	I1007 12:06:54.689669 1437689 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":92966,"bootTime":1728209849,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 12:06:54.689745 1437689 start.go:139] virtualization:  
	I1007 12:06:54.694655 1437689 out.go:177] * [functional-632459] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:06:54.697440 1437689 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:54.697506 1437689 notify.go:220] Checking for updates...
	I1007 12:06:54.702553 1437689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:54.705170 1437689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:06:54.708040 1437689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 12:06:54.710773 1437689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:06:54.713489 1437689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:54.716640 1437689 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:06:54.717216 1437689 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:54.749706 1437689 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:06:54.749826 1437689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:06:54.827727 1437689 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:06:54.817500119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:06:54.827847 1437689 docker.go:318] overlay module found
	I1007 12:06:54.830833 1437689 out.go:177] * Using the docker driver based on existing profile
	I1007 12:06:54.833400 1437689 start.go:297] selected driver: docker
	I1007 12:06:54.833424 1437689 start.go:901] validating driver "docker" against &{Name:functional-632459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-632459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:06:54.833535 1437689 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:54.836720 1437689 out.go:201] 
	W1007 12:06:54.839385 1437689 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 12:06:54.842085 1437689 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.55s)
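
DryRun exercises flag validation without touching the running cluster: an undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23), while a well-formed dry run exits cleanly. A minimal sketch against an existing profile named demo (hypothetical):

    # fails: 250MB is below the usable minimum of 1800MB reported above
    minikube start -p demo --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    # succeeds: same profile, no resource override
    minikube start -p demo --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd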

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-632459 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-632459 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (259.980562ms)

-- stdout --
	* [functional-632459] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1007 12:06:54.447367 1437599 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:54.447640 1437599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:54.447678 1437599 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:54.447699 1437599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:54.448139 1437599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:06:54.448653 1437599 out.go:352] Setting JSON to false
	I1007 12:06:54.449840 1437599 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":92966,"bootTime":1728209849,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 12:06:54.449977 1437599 start.go:139] virtualization:  
	I1007 12:06:54.453604 1437599 out.go:177] * [functional-632459] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1007 12:06:54.456350 1437599 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:54.456432 1437599 notify.go:220] Checking for updates...
	I1007 12:06:54.461751 1437599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:54.464360 1437599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:06:54.467171 1437599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 12:06:54.469694 1437599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:06:54.472280 1437599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:54.476534 1437599 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:06:54.477054 1437599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:54.497834 1437599 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:06:54.497954 1437599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:06:54.592024 1437599 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:06:54.581745298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:06:54.592135 1437599 docker.go:318] overlay module found
	I1007 12:06:54.594998 1437599 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1007 12:06:54.597606 1437599 start.go:297] selected driver: docker
	I1007 12:06:54.597635 1437599 start.go:901] validating driver "docker" against &{Name:functional-632459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-632459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:06:54.597746 1437599 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:54.600991 1437599 out.go:201] 
	W1007 12:06:54.603866 1437599 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 12:06:54.606395 1437599 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
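For non-French readers: the localized lines above read "Using the docker driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The subtest passes because that refusal came back in French. A rough manual repro (a sketch only: the LC_ALL value and the exact start flags are assumptions about how the test drives minikube):

# Ask for less memory than the 1800MB floor under a French locale;
# minikube should refuse, in French, without mutating the cluster.
LC_ALL=fr out/minikube-linux-arm64 start -p functional-632459 \
  --dry-run --memory=250MB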

TestFunctional/parallel/StatusCmd (1.20s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)
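The -f flag seen above takes a Go template over minikube's status struct, which makes single fields easy to script; both invocations below are lifted from the run above (the field names {{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}} all appear in the log):

# One-line, shell-friendly status built from individual fields:
out/minikube-linux-arm64 -p functional-632459 status \
  -f 'host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# Full status as JSON for machine consumption:
out/minikube-linux-arm64 -p functional-632459 status -o json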

TestFunctional/parallel/ServiceCmdConnect (10.61s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-632459 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-632459 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-7hq4w" [d23b06db-ec2a-4b48-b062-94b6cbb907b8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-7hq4w" [d23b06db-ec2a-4b48-b062-94b6cbb907b8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003763551s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31668
functional_test.go:1675: http://192.168.49.2:31668: success! body:

Hostname: hello-node-connect-65d86f57f4-7hq4w

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31668
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.61s)
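For reference, the flow this subtest exercises can be replayed by hand; every command below appears in the log above, with curl standing in for the test's own Go HTTP client:

kubectl --context functional-632459 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-632459 expose deployment hello-node-connect \
  --type=NodePort --port=8080
# Resolve the NodePort URL, then fetch the echoserver report shown above:
URL=$(out/minikube-linux-arm64 -p functional-632459 service hello-node-connect --url)
curl -s "$URL"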

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.80s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4aa666e8-59f3-48fe-aca8-9c8b22529b2c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00485717s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-632459 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-632459 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-632459 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632459 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [889bdaa6-aca5-403f-b64f-f545c12c97d3] Pending
helpers_test.go:344: "sp-pod" [889bdaa6-aca5-403f-b64f-f545c12c97d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [889bdaa6-aca5-403f-b64f-f545c12c97d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004454933s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-632459 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-632459 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-632459 delete -f testdata/storage-provisioner/pod.yaml: (1.578655459s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-632459 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98fffdb7-793a-496b-b65e-d560732a0f21] Pending
helpers_test.go:344: "sp-pod" [98fffdb7-793a-496b-b65e-d560732a0f21] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00478449s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-632459 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.80s)
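The claim manifest itself (testdata/storage-provisioner/pvc.yaml) is not reproduced in the log; the sketch below is a hypothetical minimal equivalent, matching only the claim name myclaim that the test queries (access mode and size are assumptions):

kubectl --context functional-632459 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim        # name taken from the 'get pvc myclaim' step above
spec:
  accessModes: ["ReadWriteOnce"]   # assumed
  resources:
    requests:
      storage: 500Mi               # assumed
EOF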

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.00s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh -n functional-632459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cp functional-632459:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3875722162/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh -n functional-632459 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh -n functional-632459 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.00s)
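The three copy directions exercised above, taken from the log (the local destination in the second command is simplified here): host file into the node, node file back out, and a copy into a node path whose parent directories do not yet exist.

out/minikube-linux-arm64 -p functional-632459 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-632459 cp functional-632459:/home/docker/cp-test.txt ./cp-test.txt
out/minikube-linux-arm64 -p functional-632459 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt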

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1400308/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/test/nested/copy/1400308/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)
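The synced path checked above comes from minikube's file-sync mechanism: files placed under the minikube home's files/ tree are copied to the same path inside the node. A sketch, assuming the default $HOME/.minikube layout (the literal path reuses the test's own, which embeds the test PID 1400308):

mkdir -p ~/.minikube/files/etc/test/nested/copy/1400308
echo 'Test file for checking file sync process' \
  > ~/.minikube/files/etc/test/nested/copy/1400308/hosts
# After the next start, the file appears at the mirrored path in the node:
out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/test/nested/copy/1400308/hosts"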

TestFunctional/parallel/CertSync (1.91s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1400308.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/ssl/certs/1400308.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1400308.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /usr/share/ca-certificates/1400308.pem"
2024/10/07 12:07:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14003082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/ssl/certs/14003082.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14003082.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /usr/share/ca-certificates/14003082.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)
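Cert sync works the same way: a PEM dropped into the minikube home's certs/ directory is installed into /etc/ssl/certs and /usr/share/ca-certificates inside the node, and the 51391683.0 / 3ec20f2e.0 names checked above are the OpenSSL subject-hash forms of the same certs. A sketch, assuming the default $HOME/.minikube layout:

# The test cert is named after the test PID; any CA PEM works the same way:
cp 1400308.pem ~/.minikube/certs/
out/minikube-linux-arm64 -p functional-632459 ssh "sudo cat /etc/ssl/certs/1400308.pem"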

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-632459 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh "sudo systemctl is-active docker": exit status 1 (346.224834ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh "sudo systemctl is-active crio": exit status 1 (347.951215ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
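systemctl is-active reports the unit state on stdout and in its exit status (0 for active; 3, as seen in "Process exited with status 3" above, for inactive), so the test can assert both the "inactive" text and the non-zero exit:

# Exits non-zero because dockerd is not the active runtime on this containerd profile:
out/minikube-linux-arm64 -p functional-632459 ssh "sudo systemctl is-active docker" \
  || echo "docker is not the active runtime, as expected"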

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 version -o=json --components: (1.3446884s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632459 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-632459
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-632459
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632459 image ls --format short --alsologtostderr:
I1007 12:07:07.583507 1439869 out.go:345] Setting OutFile to fd 1 ...
I1007 12:07:07.583730 1439869 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.583742 1439869 out.go:358] Setting ErrFile to fd 2...
I1007 12:07:07.583747 1439869 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.584015 1439869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
I1007 12:07:07.584784 1439869 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.584911 1439869 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.585436 1439869 cli_runner.go:164] Run: docker container inspect functional-632459 --format={{.State.Status}}
I1007 12:07:07.607779 1439869 ssh_runner.go:195] Run: systemctl --version
I1007 12:07:07.607839 1439869 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632459
I1007 12:07:07.649025 1439869 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37911 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/functional-632459/id_rsa Username:docker}
I1007 12:07:07.745904 1439869 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632459 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-632459  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-632459  | sha256:86cbac | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632459 image ls --format table --alsologtostderr:
I1007 12:07:07.969489 1439959 out.go:345] Setting OutFile to fd 1 ...
I1007 12:07:07.970405 1439959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.970451 1439959 out.go:358] Setting ErrFile to fd 2...
I1007 12:07:07.970474 1439959 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.971000 1439959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
I1007 12:07:07.972177 1439959 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.972380 1439959 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.973196 1439959 cli_runner.go:164] Run: docker container inspect functional-632459 --format={{.State.Status}}
I1007 12:07:07.994157 1439959 ssh_runner.go:195] Run: systemctl --version
I1007 12:07:07.994208 1439959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632459
I1007 12:07:08.015076 1439959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37911 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/functional-632459/id_rsa Username:docker}
I1007 12:07:08.113760 1439959 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632459 image ls --format json --alsologtostderr:
[{"id":"sha256:86cbacaa688bf1e74d90bcf402129f53a5d8d2ac6b7df29b785f29a6fe29c5b0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-632459"],"size":"991"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240
813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTa
gs":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7e
daab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-632459"],"size":"2173567"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:
3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632459 image ls --format json --alsologtostderr:
I1007 12:07:07.664253 1439888 out.go:345] Setting OutFile to fd 1 ...
I1007 12:07:07.664480 1439888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.664508 1439888 out.go:358] Setting ErrFile to fd 2...
I1007 12:07:07.664546 1439888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.664845 1439888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
I1007 12:07:07.665671 1439888 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.665855 1439888 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.666437 1439888 cli_runner.go:164] Run: docker container inspect functional-632459 --format={{.State.Status}}
I1007 12:07:07.685145 1439888 ssh_runner.go:195] Run: systemctl --version
I1007 12:07:07.685200 1439888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632459
I1007 12:07:07.704782 1439888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37911 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/functional-632459/id_rsa Username:docker}
I1007 12:07:07.802602 1439888 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
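The JSON format is the easiest of the four list formats to post-process; for example (the jq filter is illustrative, not part of the test):

# Print only the tag names from the image list:
out/minikube-linux-arm64 -p functional-632459 image ls --format json | jq -r '.[].repoTags[]?'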

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-632459 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-632459
size: "2173567"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:86cbacaa688bf1e74d90bcf402129f53a5d8d2ac6b7df29b785f29a6fe29c5b0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-632459
size: "991"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632459 image ls --format yaml --alsologtostderr:
I1007 12:07:07.855442 1439938 out.go:345] Setting OutFile to fd 1 ...
I1007 12:07:07.855610 1439938 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.855629 1439938 out.go:358] Setting ErrFile to fd 2...
I1007 12:07:07.855635 1439938 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:07.855975 1439938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
I1007 12:07:07.856850 1439938 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.857013 1439938 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:07.857599 1439938 cli_runner.go:164] Run: docker container inspect functional-632459 --format={{.State.Status}}
I1007 12:07:07.902405 1439938 ssh_runner.go:195] Run: systemctl --version
I1007 12:07:07.902460 1439938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632459
I1007 12:07:07.929647 1439938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37911 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/functional-632459/id_rsa Username:docker}
I1007 12:07:08.024721 1439938 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh pgrep buildkitd: exit status 1 (324.187305ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image build -t localhost/my-image:functional-632459 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 image build -t localhost/my-image:functional-632459 testdata/build --alsologtostderr: (2.784366188s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-632459 image build -t localhost/my-image:functional-632459 testdata/build --alsologtostderr:
I1007 12:07:08.460641 1440066 out.go:345] Setting OutFile to fd 1 ...
I1007 12:07:08.461602 1440066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:08.461647 1440066 out.go:358] Setting ErrFile to fd 2...
I1007 12:07:08.461668 1440066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 12:07:08.462100 1440066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
I1007 12:07:08.463273 1440066 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:08.465361 1440066 config.go:182] Loaded profile config "functional-632459": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 12:07:08.466086 1440066 cli_runner.go:164] Run: docker container inspect functional-632459 --format={{.State.Status}}
I1007 12:07:08.483600 1440066 ssh_runner.go:195] Run: systemctl --version
I1007 12:07:08.483660 1440066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-632459
I1007 12:07:08.499764 1440066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37911 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/functional-632459/id_rsa Username:docker}
I1007 12:07:08.592130 1440066 build_images.go:161] Building image from path: /tmp/build.3209785169.tar
I1007 12:07:08.592214 1440066 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 12:07:08.601363 1440066 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3209785169.tar
I1007 12:07:08.604758 1440066 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3209785169.tar: stat -c "%s %y" /var/lib/minikube/build/build.3209785169.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3209785169.tar': No such file or directory
I1007 12:07:08.604797 1440066 ssh_runner.go:362] scp /tmp/build.3209785169.tar --> /var/lib/minikube/build/build.3209785169.tar (3072 bytes)
I1007 12:07:08.630791 1440066 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3209785169
I1007 12:07:08.640211 1440066 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3209785169 -xf /var/lib/minikube/build/build.3209785169.tar
I1007 12:07:08.649596 1440066 containerd.go:394] Building image: /var/lib/minikube/build/build.3209785169
I1007 12:07:08.649681 1440066 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3209785169 --local dockerfile=/var/lib/minikube/build/build.3209785169 --output type=image,name=localhost/my-image:functional-632459
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e82c5d2afad31e404a388d524cb8097139392f5da0cc1db7e0aa3ce31503a033
#8 exporting manifest sha256:e82c5d2afad31e404a388d524cb8097139392f5da0cc1db7e0aa3ce31503a033 0.0s done
#8 exporting config sha256:e52bad9c10f6ca1a28431c467170db37c59906673db74a595fae7f77b8326fa1 0.0s done
#8 naming to localhost/my-image:functional-632459 done
#8 DONE 0.1s
I1007 12:07:11.161546 1440066 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3209785169 --local dockerfile=/var/lib/minikube/build/build.3209785169 --output type=image,name=localhost/my-image:functional-632459: (2.511832631s)
I1007 12:07:11.161641 1440066 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3209785169
I1007 12:07:11.171298 1440066 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3209785169.tar
I1007 12:07:11.181567 1440066 build_images.go:217] Built localhost/my-image:functional-632459 from /tmp/build.3209785169.tar
I1007 12:07:11.181596 1440066 build_images.go:133] succeeded building to: functional-632459
I1007 12:07:11.181601 1440066 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.36s)
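The buildkit trace records the whole Dockerfile: step #5 is the FROM, #6 is RUN true, #7 is ADD content.txt /. A hypothetical reconstruction of the testdata/build context (the real files may differ; the contents of content.txt are assumed):

mkdir -p build
printf 'test\n' > build/content.txt        # contents assumed
cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
out/minikube-linux-arm64 -p functional-632459 image build -t localhost/my-image:functional-632459 build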

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-632459
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr: (1.249719343s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1434800: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-632459 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5ca44ff5-d2ae-4202-af0f-31da64f4505f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5ca44ff5-d2ae-4202-af0f-31da64f4505f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003513904s
I1007 12:06:31.213266 1400308 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.46s)
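The tunnel subtests hang together like this: StartTunnel leaves a tunnel process running, WaitService/Setup waits for the nginx-svc pod, and the following IngressIP subtest reads the LoadBalancer ingress address the tunnel assigns. A manual equivalent:

# Keep a tunnel running in the background, then read the ingress IP it assigns:
out/minikube-linux-arm64 -p functional-632459 tunnel &
kubectl --context functional-632459 get svc nginx-svc \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'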

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr: (1.147441853s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-632459
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 image load --daemon kicbase/echo-server:functional-632459 --alsologtostderr: (1.060121334s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image save kicbase/echo-server:functional-632459 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image rm kicbase/echo-server:functional-632459 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)
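Note: ImageSaveToFile and ImageLoadFromFile together exercise the save/load round trip through a tarball. A sketch of the same sequence driven from Go, assuming a writable tarball path (/tmp/echo-server-save.tar here is illustrative; the run above used the Jenkins workspace path):

package main

import (
	"log"
	"os/exec"
)

// Round-trip an image through a tarball with the same commands the two
// tests above run: image save to a file, then image load from it.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	tar := "/tmp/echo-server-save.tar" // any writable path works
	run("out/minikube-linux-arm64", "-p", "functional-632459",
		"image", "save", "kicbase/echo-server:functional-632459", tar)
	run("out/minikube-linux-arm64", "-p", "functional-632459",
		"image", "load", tar)
	run("out/minikube-linux-arm64", "-p", "functional-632459", "image", "ls")
}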

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-632459
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 image save --daemon kicbase/echo-server:functional-632459 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-632459
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-632459 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.225.29 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-632459 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-632459 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-632459 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-xhxm4" [ec665988-1350-4ee9-8def-89c75796fe13] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-xhxm4" [ec665988-1350-4ee9-8def-89c75796fe13] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.00371328s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)
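Note: the deploy step is plain kubectl: create a deployment, expose it as a NodePort, then wait for the pod. A sketch using `kubectl wait` in place of the harness's own pod poller (the context, image, and port come from the log above):

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-632459"}, args...)
	if out, err := exec.Command("kubectl", full...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
	// Block until the pod reports Ready instead of hand-rolling a poller.
	kubectl("wait", "--for=condition=Ready", "pod", "-l", "app=hello-node", "--timeout=10m")
}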

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service list -o json
functional_test.go:1494: Took "520.764386ms" to run "out/minikube-linux-arm64 -p functional-632459 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)
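Note: `service list -o json` is the machine-readable variant of the list command. A sketch that decodes it generically, assuming only that the top level is a JSON array (the log does not show the field schema, so none is assumed here):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632459",
		"service", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode without committing to a schema; inspect keys as needed.
	var services []map[string]any
	if err := json.Unmarshal(out, &services); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d services listed\n", len(services))
}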

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32638
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32638
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
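Note: `service <name> --url` prints a reachable endpoint (http://192.168.49.2:32638 in this run), so a natural follow-up check is a plain HTTP GET against it. A sketch, assuming the URL is on the command's first output line:

package main

import (
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-632459",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(strings.SplitN(string(out), "\n", 2)[0])
	resp, err := http.Get(url) // e.g. http://192.168.49.2:32638
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("endpoint answered:", resp.Status)
}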

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "360.712303ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "86.531691ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "542.139615ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "101.713763ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)

TestFunctional/parallel/MountCmd/any-port (8.7s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdany-port2734497369/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728302812725620989" to /tmp/TestFunctionalparallelMountCmdany-port2734497369/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728302812725620989" to /tmp/TestFunctionalparallelMountCmdany-port2734497369/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728302812725620989" to /tmp/TestFunctionalparallelMountCmdany-port2734497369/001/test-1728302812725620989
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (463.905667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:06:53.190446 1400308 retry.go:31] will retry after 466.03722ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh -- ls -la /mount-9p
E1007 12:06:54.111795 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:54.118471 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:54.130225 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:54.151713 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:54.193138 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:06:54.274586 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 12:06 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 12:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 12:06 test-1728302812725620989
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh cat /mount-9p/test-1728302812725620989
E1007 12:06:54.436733 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-632459 replace --force -f testdata/busybox-mount-test.yaml
E1007 12:06:54.758770 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [78360fb8-1a9b-4042-ad3c-db61d36d5627] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [78360fb8-1a9b-4042-ad3c-db61d36d5627] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1007 12:06:59.244766 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [78360fb8-1a9b-4042-ad3c-db61d36d5627] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.059317462s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-632459 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdany-port2734497369/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.70s)
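Note: the first findmnt probe fails with exit status 1 because the 9p mount is not up yet; retry.go:31 then backs off and re-runs it. The interleaved cert_rotation errors appear to come from a client-go certificate watcher still pointed at the deleted addons-268164 profile and are unrelated to this test. A sketch of the retry-with-backoff pattern around the same probe (attempt count and initial delay are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or attempts run out, doubling the delay
// between tries, in the spirit of the harness's retry.go wrapper.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 500*time.Millisecond, func() error {
		return exec.Command("out/minikube-linux-arm64", "-p", "functional-632459",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		fmt.Println("mount never appeared:", err)
	}
}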

TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdspecific-port2064429664/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (605.093482ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1007 12:07:02.032832 1400308 retry.go:31] will retry after 282.303455ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdspecific-port2064429664/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-632459 ssh "sudo umount -f /mount-9p": exit status 1 (341.139761ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-632459 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdspecific-port2064429664/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T" /mount1
E1007 12:07:04.366363 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T" /mount1: (1.000758687s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-632459 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-632459 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-632459 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1684454778/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-632459
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-632459
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-632459
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (116.16s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-840042 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1007 12:07:14.608246 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:07:35.089778 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:08:16.051116 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-840042 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.308756415s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.16s)

TestMultiControlPlane/serial/DeployApp (32.21s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- rollout status deployment/busybox
E1007 12:09:37.973117 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-840042 -- rollout status deployment/busybox: (29.114190119s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-8ddzs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-pvwgz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-sdxkj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-8ddzs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-pvwgz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-sdxkj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-8ddzs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-pvwgz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-sdxkj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.21s)
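Note: the DNS checks run nslookup inside each busybox replica for an external name and the in-cluster kubernetes service names. A sketch of the same loop, using the pod names from this run (in practice they are discovered first via the jsonpath query at ha_test.go:163):

package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-8ddzs", "busybox-7dff88458-pvwgz", "busybox-7dff88458-sdxkj"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Same invocation shape as the test: minikube's bundled kubectl,
			// exec'ing nslookup inside the pod.
			cmd := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "ha-840042", "--",
				"exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s failed to resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	log.Println("all replicas resolved all names")
}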

TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-8ddzs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-8ddzs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-pvwgz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-pvwgz -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-sdxkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-840042 -- exec busybox-7dff88458-sdxkj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
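Note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` keeps only the fifth line of nslookup output and then its third single-space-separated field, which in this busybox image holds the resolved host IP; the test then pings that address. A sketch of the same parse in Go (the sample output below is hypothetical; busybox nslookup formatting varies by version):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5, then the third
// field when splitting on single spaces (cut keeps empty fields, so a plain
// Split on " " matches its behavior, unlike strings.Fields).
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical output shaped like busybox nslookup; not captured from this run.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress: 1 192.168.49.1\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1
}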

TestMultiControlPlane/serial/AddWorkerNode (22.85s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-840042 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-840042 -v=7 --alsologtostderr: (21.757298853s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr: (1.096124043s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.85s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-840042 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.030103398s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 status --output json -v=7 --alsologtostderr: (1.051702966s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp testdata/cp-test.txt ha-840042:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634229029/001/cp-test_ha-840042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042:/home/docker/cp-test.txt ha-840042-m02:/home/docker/cp-test_ha-840042_ha-840042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test_ha-840042_ha-840042-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042:/home/docker/cp-test.txt ha-840042-m03:/home/docker/cp-test_ha-840042_ha-840042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test_ha-840042_ha-840042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042:/home/docker/cp-test.txt ha-840042-m04:/home/docker/cp-test_ha-840042_ha-840042-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test_ha-840042_ha-840042-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp testdata/cp-test.txt ha-840042-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634229029/001/cp-test_ha-840042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m02:/home/docker/cp-test.txt ha-840042:/home/docker/cp-test_ha-840042-m02_ha-840042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test_ha-840042-m02_ha-840042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m02:/home/docker/cp-test.txt ha-840042-m03:/home/docker/cp-test_ha-840042-m02_ha-840042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test_ha-840042-m02_ha-840042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m02:/home/docker/cp-test.txt ha-840042-m04:/home/docker/cp-test_ha-840042-m02_ha-840042-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test_ha-840042-m02_ha-840042-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp testdata/cp-test.txt ha-840042-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634229029/001/cp-test_ha-840042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m03:/home/docker/cp-test.txt ha-840042:/home/docker/cp-test_ha-840042-m03_ha-840042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test_ha-840042-m03_ha-840042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m03:/home/docker/cp-test.txt ha-840042-m02:/home/docker/cp-test_ha-840042-m03_ha-840042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test_ha-840042-m03_ha-840042-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m03:/home/docker/cp-test.txt ha-840042-m04:/home/docker/cp-test_ha-840042-m03_ha-840042-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test_ha-840042-m03_ha-840042-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp testdata/cp-test.txt ha-840042-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile634229029/001/cp-test_ha-840042-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m04:/home/docker/cp-test.txt ha-840042:/home/docker/cp-test_ha-840042-m04_ha-840042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042 "sudo cat /home/docker/cp-test_ha-840042-m04_ha-840042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m04:/home/docker/cp-test.txt ha-840042-m02:/home/docker/cp-test_ha-840042-m04_ha-840042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m02 "sudo cat /home/docker/cp-test_ha-840042-m04_ha-840042-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 cp ha-840042-m04:/home/docker/cp-test.txt ha-840042-m03:/home/docker/cp-test_ha-840042-m04_ha-840042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 ssh -n ha-840042-m03 "sudo cat /home/docker/cp-test_ha-840042-m04_ha-840042-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.76s)
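Note: every step in the copy matrix above is the same two-command unit: `cp` a file onto a node, then `ssh ... sudo cat` it back to verify. A sketch of one such unit, using the profile and node names from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	mk := "out/minikube-linux-arm64"
	run := func(args ...string) string {
		out, err := exec.Command(mk, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v: %v\n%s", args, err, out)
		}
		return string(out)
	}
	// Copy a local file onto the m02 node, then read it back to verify.
	run("-p", "ha-840042", "cp", "testdata/cp-test.txt", "ha-840042-m02:/home/docker/cp-test.txt")
	got := run("-p", "ha-840042", "ssh", "-n", "ha-840042-m02", "sudo cat /home/docker/cp-test.txt")
	if strings.TrimSpace(got) == "" {
		log.Fatal("copied file came back empty")
	}
	log.Println("copy verified")
}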

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 node stop m02 -v=7 --alsologtostderr: (12.130046647s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr: exit status 7 (737.425474ms)

-- stdout --
	ha-840042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-840042-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840042-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-840042-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1007 12:10:40.195964 1456177 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:10:40.196309 1456177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:10:40.196347 1456177 out.go:358] Setting ErrFile to fd 2...
	I1007 12:10:40.196369 1456177 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:10:40.196739 1456177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:10:40.197275 1456177 out.go:352] Setting JSON to false
	I1007 12:10:40.197344 1456177 mustload.go:65] Loading cluster: ha-840042
	I1007 12:10:40.197443 1456177 notify.go:220] Checking for updates...
	I1007 12:10:40.198593 1456177 config.go:182] Loaded profile config "ha-840042": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:10:40.198661 1456177 status.go:174] checking status of ha-840042 ...
	I1007 12:10:40.199513 1456177 cli_runner.go:164] Run: docker container inspect ha-840042 --format={{.State.Status}}
	I1007 12:10:40.220728 1456177 status.go:371] ha-840042 host status = "Running" (err=<nil>)
	I1007 12:10:40.220769 1456177 host.go:66] Checking if "ha-840042" exists ...
	I1007 12:10:40.221079 1456177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840042
	I1007 12:10:40.248524 1456177 host.go:66] Checking if "ha-840042" exists ...
	I1007 12:10:40.248816 1456177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:10:40.248878 1456177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840042
	I1007 12:10:40.266586 1456177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37916 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/ha-840042/id_rsa Username:docker}
	I1007 12:10:40.361353 1456177 ssh_runner.go:195] Run: systemctl --version
	I1007 12:10:40.365870 1456177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:40.378943 1456177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:10:40.440840 1456177 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-07 12:10:40.430063175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:10:40.441480 1456177 kubeconfig.go:125] found "ha-840042" server: "https://192.168.49.254:8443"
	I1007 12:10:40.441528 1456177 api_server.go:166] Checking apiserver status ...
	I1007 12:10:40.441586 1456177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:40.454088 1456177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	I1007 12:10:40.463733 1456177 api_server.go:182] apiserver freezer: "5:freezer:/docker/7353d31d38fd521108f536ed76c31fb47f890189ad557805d9b69efbfa6058f2/kubepods/burstable/pod7ff04de4da92b7de40865a797a484101/587bf71e2661243f3dfeac1d2bbc0d138f38d8f8af38ee49567d89225d6bb18e"
	I1007 12:10:40.463804 1456177 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7353d31d38fd521108f536ed76c31fb47f890189ad557805d9b69efbfa6058f2/kubepods/burstable/pod7ff04de4da92b7de40865a797a484101/587bf71e2661243f3dfeac1d2bbc0d138f38d8f8af38ee49567d89225d6bb18e/freezer.state
	I1007 12:10:40.473423 1456177 api_server.go:204] freezer state: "THAWED"
	I1007 12:10:40.473453 1456177 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 12:10:40.481510 1456177 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 12:10:40.481541 1456177 status.go:463] ha-840042 apiserver status = Running (err=<nil>)
	I1007 12:10:40.481552 1456177 status.go:176] ha-840042 status: &{Name:ha-840042 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:10:40.481569 1456177 status.go:174] checking status of ha-840042-m02 ...
	I1007 12:10:40.481866 1456177 cli_runner.go:164] Run: docker container inspect ha-840042-m02 --format={{.State.Status}}
	I1007 12:10:40.509588 1456177 status.go:371] ha-840042-m02 host status = "Stopped" (err=<nil>)
	I1007 12:10:40.509617 1456177 status.go:384] host is not running, skipping remaining checks
	I1007 12:10:40.509625 1456177 status.go:176] ha-840042-m02 status: &{Name:ha-840042-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:10:40.509647 1456177 status.go:174] checking status of ha-840042-m03 ...
	I1007 12:10:40.509955 1456177 cli_runner.go:164] Run: docker container inspect ha-840042-m03 --format={{.State.Status}}
	I1007 12:10:40.526471 1456177 status.go:371] ha-840042-m03 host status = "Running" (err=<nil>)
	I1007 12:10:40.526498 1456177 host.go:66] Checking if "ha-840042-m03" exists ...
	I1007 12:10:40.526792 1456177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840042-m03
	I1007 12:10:40.543864 1456177 host.go:66] Checking if "ha-840042-m03" exists ...
	I1007 12:10:40.544174 1456177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:10:40.544230 1456177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840042-m03
	I1007 12:10:40.561589 1456177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37926 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/ha-840042-m03/id_rsa Username:docker}
	I1007 12:10:40.653127 1456177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:40.666590 1456177 kubeconfig.go:125] found "ha-840042" server: "https://192.168.49.254:8443"
	I1007 12:10:40.666625 1456177 api_server.go:166] Checking apiserver status ...
	I1007 12:10:40.666669 1456177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:40.678452 1456177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	I1007 12:10:40.688434 1456177 api_server.go:182] apiserver freezer: "5:freezer:/docker/272419a809392b070de43659ad7eac8a5dba4a2adfb21766ac719d1fd8d3fc3d/kubepods/burstable/pode6fe3473f5948f7e7834b0a8c2271a88/865bb82d8cd61ccc0083cd61e990ac23b21c6e3f1324ed1184d0a173f695b056"
	I1007 12:10:40.688565 1456177 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/272419a809392b070de43659ad7eac8a5dba4a2adfb21766ac719d1fd8d3fc3d/kubepods/burstable/pode6fe3473f5948f7e7834b0a8c2271a88/865bb82d8cd61ccc0083cd61e990ac23b21c6e3f1324ed1184d0a173f695b056/freezer.state
	I1007 12:10:40.697641 1456177 api_server.go:204] freezer state: "THAWED"
	I1007 12:10:40.697670 1456177 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 12:10:40.705603 1456177 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 12:10:40.705632 1456177 status.go:463] ha-840042-m03 apiserver status = Running (err=<nil>)
	I1007 12:10:40.705641 1456177 status.go:176] ha-840042-m03 status: &{Name:ha-840042-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:10:40.705689 1456177 status.go:174] checking status of ha-840042-m04 ...
	I1007 12:10:40.706019 1456177 cli_runner.go:164] Run: docker container inspect ha-840042-m04 --format={{.State.Status}}
	I1007 12:10:40.722978 1456177 status.go:371] ha-840042-m04 host status = "Running" (err=<nil>)
	I1007 12:10:40.723006 1456177 host.go:66] Checking if "ha-840042-m04" exists ...
	I1007 12:10:40.723303 1456177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840042-m04
	I1007 12:10:40.740443 1456177 host.go:66] Checking if "ha-840042-m04" exists ...
	I1007 12:10:40.740773 1456177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:10:40.740828 1456177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840042-m04
	I1007 12:10:40.767904 1456177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37931 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/ha-840042-m04/id_rsa Username:docker}
	I1007 12:10:40.862077 1456177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:40.877391 1456177 status.go:176] ha-840042-m04 status: &{Name:ha-840042-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
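Note: the status probe in the stderr above decides whether the apiserver is running by finding its pid with pgrep, reading the freezer line from /proc/<pid>/cgroup, and checking freezer.state (THAWED means not frozen). A sketch of that probe, assuming it runs on the node itself (e.g. via `minikube ssh`) with a cgroup v1 freezer hierarchy as in this log:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Same pattern the harness greps for: the newest exact-match apiserver process.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		log.Fatal("apiserver not running: ", err)
	}
	pid := strings.TrimSpace(string(out))

	cg, err := os.ReadFile("/proc/" + pid + "/cgroup")
	if err != nil {
		log.Fatal(err)
	}
	// /proc/<pid>/cgroup lines look like "5:freezer:/docker/.../pod.../...".
	for _, line := range strings.Split(string(cg), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("freezer state:", strings.TrimSpace(string(state))) // expect THAWED
			return
		}
	}
	log.Fatal("no freezer cgroup line found")
}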

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.15s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 node start m02 -v=7 --alsologtostderr: (17.807180836s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr: (1.208995604s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.15s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.053878878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.05s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.7s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-840042 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-840042 -v=7 --alsologtostderr
E1007 12:11:21.756056 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:21.762755 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:21.774066 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:21.795423 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:21.836809 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:21.918199 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:22.079632 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:22.401022 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:23.043009 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:24.324731 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:26.886033 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:32.007324 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-840042 -v=7 --alsologtostderr: (37.588979614s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-840042 --wait=true -v=7 --alsologtostderr
E1007 12:11:42.248908 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:54.109739 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:12:02.730368 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:12:21.814751 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:12:43.692136 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-840042 --wait=true -v=7 --alsologtostderr: (1m46.943096119s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-840042
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (144.70s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.03s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 node delete m03 -v=7 --alsologtostderr: (10.041712493s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.03s)
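
The node checks above lean on a kubectl go-template that prints each node's Ready condition. Isolated as a sketch (assuming a kubeconfig already pointed at the cluster; the extra quoting in the harness invocation is an artifact of how the test shells out):

    # Print one True/False line per node, taken from the Ready condition.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

Each line should read True when the corresponding node is healthy.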

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.16s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 stop -v=7 --alsologtostderr
E1007 12:14:05.614913 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 stop -v=7 --alsologtostderr: (36.041634006s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr: exit status 7 (122.873966ms)
-- stdout --
	ha-840042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840042-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840042-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1007 12:14:14.500725 1470480 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:14:14.500922 1470480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:14:14.500949 1470480 out.go:358] Setting ErrFile to fd 2...
	I1007 12:14:14.500970 1470480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:14:14.501245 1470480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:14:14.501472 1470480 out.go:352] Setting JSON to false
	I1007 12:14:14.501535 1470480 mustload.go:65] Loading cluster: ha-840042
	I1007 12:14:14.501587 1470480 notify.go:220] Checking for updates...
	I1007 12:14:14.502023 1470480 config.go:182] Loaded profile config "ha-840042": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:14:14.502064 1470480 status.go:174] checking status of ha-840042 ...
	I1007 12:14:14.502637 1470480 cli_runner.go:164] Run: docker container inspect ha-840042 --format={{.State.Status}}
	I1007 12:14:14.521249 1470480 status.go:371] ha-840042 host status = "Stopped" (err=<nil>)
	I1007 12:14:14.521270 1470480 status.go:384] host is not running, skipping remaining checks
	I1007 12:14:14.521277 1470480 status.go:176] ha-840042 status: &{Name:ha-840042 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:14:14.521300 1470480 status.go:174] checking status of ha-840042-m02 ...
	I1007 12:14:14.521623 1470480 cli_runner.go:164] Run: docker container inspect ha-840042-m02 --format={{.State.Status}}
	I1007 12:14:14.550665 1470480 status.go:371] ha-840042-m02 host status = "Stopped" (err=<nil>)
	I1007 12:14:14.550689 1470480 status.go:384] host is not running, skipping remaining checks
	I1007 12:14:14.550695 1470480 status.go:176] ha-840042-m02 status: &{Name:ha-840042-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:14:14.550712 1470480 status.go:174] checking status of ha-840042-m04 ...
	I1007 12:14:14.550991 1470480 cli_runner.go:164] Run: docker container inspect ha-840042-m04 --format={{.State.Status}}
	I1007 12:14:14.567052 1470480 status.go:371] ha-840042-m04 host status = "Stopped" (err=<nil>)
	I1007 12:14:14.567072 1470480 status.go:384] host is not running, skipping remaining checks
	I1007 12:14:14.567079 1470480 status.go:176] ha-840042-m04 status: &{Name:ha-840042-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.16s)
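
Note that `minikube status` reports through its exit code as well as stdout; in this run the fully stopped cluster yielded exit status 7. A minimal sketch of scripting against that behavior (profile name taken from this run):

    # Exit status is non-zero whenever the cluster is not fully running;
    # the all-stopped cluster above returned 7.
    out/minikube-linux-arm64 -p ha-840042 status
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "ha-840042 not fully running (status exit code $rc)"
    fi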

TestMultiControlPlane/serial/RestartCluster (69.2s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-840042 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-840042 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.244702307s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (69.20s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

TestMultiControlPlane/serial/AddSecondaryNode (43.6s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-840042 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-840042 --control-plane -v=7 --alsologtostderr: (42.556852234s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-840042 status -v=7 --alsologtostderr: (1.043220672s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.60s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.012299429s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (49.99s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-169993 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1007 12:16:21.754015 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:49.456572 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:16:54.109401 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-169993 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (49.986764923s)
--- PASS: TestJSONOutput/start/Command (49.99s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-169993 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.71s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-169993 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-169993 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-169993 --output=json --user=testUser: (5.816656161s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-411947 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-411947 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.610357ms)
-- stdout --
	{"specversion":"1.0","id":"5a0f19f4-aa98-41d9-b803-a3c5067c62a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-411947] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"91ab3ee8-7d53-4296-947a-54446905dbee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"eac13eb3-d5ba-47f5-a2af-726cd29d3663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9e03edb0-1e93-4819-9a9c-536f05ef753b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig"}}
	{"specversion":"1.0","id":"975d3004-d4d7-4026-ae2b-cc24ffeed7f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube"}}
	{"specversion":"1.0","id":"866d33cd-abd3-4637-b5d3-1087bf0ad74b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cd251ed9-ade3-4eb0-b8f8-de2b5e93382a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43d9b0dd-6b82-4170-8d63-2d402ffb6647","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-411947" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-411947
--- PASS: TestErrorJSONOutput (0.23s)
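
With `--output=json`, minikube emits one CloudEvents-style JSON object per line (see the stdout above), which makes the stream straightforward to post-process. A sketch that surfaces only error events, assuming jq is available:

    # Print the message of each io.k8s.sigs.minikube.error event.
    out/minikube-linux-arm64 start -p json-output-error-411947 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

Against the run above this would print: The driver 'fail' is not supported on linux/arm64.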

TestKicCustomNetwork/create_custom_network (38.88s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-133392 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-133392 --network=: (36.816103337s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-133392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-133392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-133392: (2.046632836s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.88s)

TestKicCustomNetwork/use_default_bridge_network (32.07s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-880218 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-880218 --network=bridge: (29.994259659s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-880218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-880218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-880218: (2.049983463s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.07s)

TestKicExistingNetwork (35.86s)

=== RUN   TestKicExistingNetwork
I1007 12:18:29.843440 1400308 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1007 12:18:29.860092 1400308 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1007 12:18:29.860169 1400308 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1007 12:18:29.860187 1400308 cli_runner.go:164] Run: docker network inspect existing-network
W1007 12:18:29.878903 1400308 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1007 12:18:29.878936 1400308 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1007 12:18:29.878953 1400308 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1007 12:18:29.879146 1400308 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1007 12:18:29.896883 1400308 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-804378e3f480 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:4b:f2:e9:5b} reservation:<nil>}
I1007 12:18:29.897255 1400308 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e5d1c0}
I1007 12:18:29.897277 1400308 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1007 12:18:29.897334 1400308 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1007 12:18:29.970893 1400308 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-918747 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-918747 --network=existing-network: (33.684694631s)
helpers_test.go:175: Cleaning up "existing-network-918747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-918747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-918747: (2.009640805s)
I1007 12:19:05.681212 1400308 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.86s)
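
The flow above reduces to: create a Docker bridge network up front, then hand its name to minikube. Condensed from the commands in this log (the test additionally sets MTU, masquerade/ICC options, and minikube labels on the network):

    # Pre-create the network, then attach a new cluster to it by name.
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-918747 --network=existing-network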

TestKicCustomSubnet (32.68s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-873770 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-873770 --subnet=192.168.60.0/24: (30.498340481s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-873770 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-873770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-873770
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-873770: (2.149712932s)
--- PASS: TestKicCustomSubnet (32.68s)

TestKicStaticIP (30.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-589735 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-589735 --static-ip=192.168.200.200: (28.687916711s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-589735 ip
helpers_test.go:175: Cleaning up "static-ip-589735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-589735
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-589735: (2.09114626s)
--- PASS: TestKicStaticIP (30.95s)
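
The static-IP check is two steps: start with `--static-ip`, then confirm the address with `minikube ip`. As exercised above:

    out/minikube-linux-arm64 start -p static-ip-589735 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-589735 ip    # expected to print 192.168.200.200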

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-122032 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-122032 --driver=docker  --container-runtime=containerd: (31.926714314s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-125240 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-125240 --driver=docker  --container-runtime=containerd: (32.634173035s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-122032
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-125240
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-125240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-125240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-125240: (2.083285267s)
helpers_test.go:175: Cleaning up "first-122032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-122032
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-122032: (1.977117191s)
--- PASS: TestMinikubeProfile (70.01s)
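
The profile test boils down to starting two independent clusters and flipping the active profile between them. The core commands from the run:

    out/minikube-linux-arm64 start -p first-122032 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-125240 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 profile first-122032    # select first-122032 as the active profile
    out/minikube-linux-arm64 profile list -ojson     # verify which profile is marked active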

TestMountStart/serial/StartWithMountFirst (5.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-892209 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1007 12:21:21.754310 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-892209 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.977913503s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.98s)
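
The mount flags above pin the 9p mount's ownership, message size, and port at cluster start. The same invocation, reflowed for readability (all values are the test's own), plus the verification the next subtest performs:

    out/minikube-linux-arm64 start -p mount-start-1-892209 --memory=2048 \
      --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    # List the mounted host directory from inside the node.
    out/minikube-linux-arm64 -p mount-start-1-892209 ssh -- ls /minikube-host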

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-892209 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-894162 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-894162 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.509705962s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.51s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-894162 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-892209 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-892209 --alsologtostderr -v=5: (1.637301505s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-894162 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-894162
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-894162: (1.219235922s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-894162
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-894162: (6.824198186s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-894162 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (62.53s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-978861 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1007 12:21:54.109777 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-978861 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.007339897s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.53s)

TestMultiNode/serial/DeployApp2Nodes (16.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-978861 -- rollout status deployment/busybox: (14.873225412s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-5m9jh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-httbt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-5m9jh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-httbt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-5m9jh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-httbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.89s)
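
The DNS checks above exec nslookup in each busybox replica to prove both external and in-cluster resolution from every node. One round of the check, using kubectl directly against the cluster context instead of the minikube kubectl wrapper the harness uses (pod names are generated per run):

    kubectl --context multinode-978861 exec busybox-7dff88458-5m9jh -- nslookup kubernetes.io
    kubectl --context multinode-978861 exec busybox-7dff88458-5m9jh -- nslookup kubernetes.default.svc.cluster.local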

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-5m9jh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-5m9jh -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-httbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-978861 -- exec busybox-7dff88458-httbt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
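
The host-reachability check resolves host.minikube.internal inside the pod, scrapes the address out of busybox nslookup's output (line 5, field 3), and pings it once. Isolated, again with plain kubectl against the cluster context:

    # Extract the host IP as seen from inside the pod, then ping it once.
    HOST_IP=$(kubectl --context multinode-978861 exec busybox-7dff88458-5m9jh -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-978861 exec busybox-7dff88458-5m9jh -- sh -c "ping -c 1 $HOST_IP"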

TestMultiNode/serial/AddNode (17.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-978861 -v 3 --alsologtostderr
E1007 12:23:17.176469 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-978861 -v 3 --alsologtostderr: (16.800769779s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.47s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-978861 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.1s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp testdata/cp-test.txt multinode-978861:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150620569/001/cp-test_multinode-978861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861:/home/docker/cp-test.txt multinode-978861-m02:/home/docker/cp-test_multinode-978861_multinode-978861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test_multinode-978861_multinode-978861-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861:/home/docker/cp-test.txt multinode-978861-m03:/home/docker/cp-test_multinode-978861_multinode-978861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test_multinode-978861_multinode-978861-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp testdata/cp-test.txt multinode-978861-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150620569/001/cp-test_multinode-978861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m02:/home/docker/cp-test.txt multinode-978861:/home/docker/cp-test_multinode-978861-m02_multinode-978861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test_multinode-978861-m02_multinode-978861.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m02:/home/docker/cp-test.txt multinode-978861-m03:/home/docker/cp-test_multinode-978861-m02_multinode-978861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test_multinode-978861-m02_multinode-978861-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp testdata/cp-test.txt multinode-978861-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2150620569/001/cp-test_multinode-978861-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m03:/home/docker/cp-test.txt multinode-978861:/home/docker/cp-test_multinode-978861-m03_multinode-978861.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861 "sudo cat /home/docker/cp-test_multinode-978861-m03_multinode-978861.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861-m03:/home/docker/cp-test.txt multinode-978861-m02:/home/docker/cp-test_multinode-978861-m03_multinode-978861-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 ssh -n multinode-978861-m02 "sudo cat /home/docker/cp-test_multinode-978861-m03_multinode-978861-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.10s)
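
The copy matrix above exercises the three shapes of `minikube cp`: host-to-node, node-to-host, and node-to-node. One instance of each, taken from the run (the local destination path is simplified here):

    out/minikube-linux-arm64 -p multinode-978861 cp testdata/cp-test.txt multinode-978861:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861:/home/docker/cp-test.txt /tmp/cp-test_multinode-978861.txt
    out/minikube-linux-arm64 -p multinode-978861 cp multinode-978861:/home/docker/cp-test.txt multinode-978861-m02:/home/docker/cp-test_multinode-978861_multinode-978861-m02.txt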

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-978861 node stop m03: (1.228510363s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-978861 status: exit status 7 (516.936367ms)
-- stdout --
	multinode-978861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-978861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-978861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr: exit status 7 (515.82721ms)
-- stdout --
	multinode-978861
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-978861-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-978861-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1007 12:23:36.071807 1523665 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:23:36.071986 1523665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:23:36.072000 1523665 out.go:358] Setting ErrFile to fd 2...
	I1007 12:23:36.072005 1523665 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:23:36.072313 1523665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:23:36.072540 1523665 out.go:352] Setting JSON to false
	I1007 12:23:36.072586 1523665 mustload.go:65] Loading cluster: multinode-978861
	I1007 12:23:36.072686 1523665 notify.go:220] Checking for updates...
	I1007 12:23:36.073073 1523665 config.go:182] Loaded profile config "multinode-978861": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:23:36.073090 1523665 status.go:174] checking status of multinode-978861 ...
	I1007 12:23:36.073697 1523665 cli_runner.go:164] Run: docker container inspect multinode-978861 --format={{.State.Status}}
	I1007 12:23:36.094916 1523665 status.go:371] multinode-978861 host status = "Running" (err=<nil>)
	I1007 12:23:36.094942 1523665 host.go:66] Checking if "multinode-978861" exists ...
	I1007 12:23:36.095254 1523665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-978861
	I1007 12:23:36.122865 1523665 host.go:66] Checking if "multinode-978861" exists ...
	I1007 12:23:36.123250 1523665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:23:36.123313 1523665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-978861
	I1007 12:23:36.145267 1523665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38036 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/multinode-978861/id_rsa Username:docker}
	I1007 12:23:36.244840 1523665 ssh_runner.go:195] Run: systemctl --version
	I1007 12:23:36.249359 1523665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:23:36.261015 1523665 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:23:36.317895 1523665 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-07 12:23:36.30802365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:23:36.318495 1523665 kubeconfig.go:125] found "multinode-978861" server: "https://192.168.67.2:8443"
	I1007 12:23:36.318544 1523665 api_server.go:166] Checking apiserver status ...
	I1007 12:23:36.318591 1523665 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:23:36.329798 1523665 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I1007 12:23:36.339255 1523665 api_server.go:182] apiserver freezer: "5:freezer:/docker/bd54c36a19cfc256583e9e116743a8c0a5bae36b1c57fb21b831c9a443d04207/kubepods/burstable/podb6140ebe82a4666e1ce0cc05306bcfe5/51ba8587562279a9190508c92e530957a00f0898f3760140d5a60ca802741f75"
	I1007 12:23:36.339331 1523665 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bd54c36a19cfc256583e9e116743a8c0a5bae36b1c57fb21b831c9a443d04207/kubepods/burstable/podb6140ebe82a4666e1ce0cc05306bcfe5/51ba8587562279a9190508c92e530957a00f0898f3760140d5a60ca802741f75/freezer.state
	I1007 12:23:36.348043 1523665 api_server.go:204] freezer state: "THAWED"
	I1007 12:23:36.348077 1523665 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1007 12:23:36.355920 1523665 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1007 12:23:36.355949 1523665 status.go:463] multinode-978861 apiserver status = Running (err=<nil>)
	I1007 12:23:36.355960 1523665 status.go:176] multinode-978861 status: &{Name:multinode-978861 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:23:36.355977 1523665 status.go:174] checking status of multinode-978861-m02 ...
	I1007 12:23:36.356267 1523665 cli_runner.go:164] Run: docker container inspect multinode-978861-m02 --format={{.State.Status}}
	I1007 12:23:36.372816 1523665 status.go:371] multinode-978861-m02 host status = "Running" (err=<nil>)
	I1007 12:23:36.372842 1523665 host.go:66] Checking if "multinode-978861-m02" exists ...
	I1007 12:23:36.373133 1523665 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-978861-m02
	I1007 12:23:36.390120 1523665 host.go:66] Checking if "multinode-978861-m02" exists ...
	I1007 12:23:36.390429 1523665 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:23:36.390473 1523665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-978861-m02
	I1007 12:23:36.407397 1523665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38041 SSHKeyPath:/home/jenkins/minikube-integration/19763-1394934/.minikube/machines/multinode-978861-m02/id_rsa Username:docker}
	I1007 12:23:36.500588 1523665 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:23:36.513326 1523665 status.go:176] multinode-978861-m02 status: &{Name:multinode-978861-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:23:36.513361 1523665 status.go:174] checking status of multinode-978861-m03 ...
	I1007 12:23:36.513656 1523665 cli_runner.go:164] Run: docker container inspect multinode-978861-m03 --format={{.State.Status}}
	I1007 12:23:36.532558 1523665 status.go:371] multinode-978861-m03 host status = "Stopped" (err=<nil>)
	I1007 12:23:36.532584 1523665 status.go:384] host is not running, skipping remaining checks
	I1007 12:23:36.532591 1523665 status.go:176] multinode-978861-m03 status: &{Name:multinode-978861-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
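
Note: the stderr trace above shows the three-step apiserver check that the status command performs per control-plane node: pgrep for the kube-apiserver process, a read of its freezer cgroup (expecting "THAWED"), and an HTTPS GET against /healthz. A minimal Go sketch of the final probe, assuming the endpoint from the log; certificate verification is skipped only because a local test cluster serves a self-signed cert:

	// Hedged sketch, not minikube's source: poll an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster serves a self-signed certificate, so the probe
			// skips verification; never do this against production endpoints.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil // the log above saw 200 "ok"
	}

	func main() {
		healthy, err := apiserverHealthy("https://192.168.67.2:8443")
		fmt.Println(healthy, err)
	}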

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-978861 node start m03 -v=7 --alsologtostderr: (8.929834076s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.73s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (92.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-978861
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-978861
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-978861: (25.059465957s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-978861 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-978861 --wait=true -v=8 --alsologtostderr: (1m7.574430819s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-978861
--- PASS: TestMultiNode/serial/RestartKeepsNodes (92.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-978861 node delete m03: (4.885355334s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.58s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-978861 stop: (23.911093163s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-978861 status: exit status 7 (96.599948ms)

                                                
                                                
-- stdout --
	multinode-978861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-978861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr: exit status 7 (97.626794ms)

                                                
                                                
-- stdout --
	multinode-978861
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-978861-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:25:48.665290 1532111 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:25:48.665426 1532111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:25:48.665436 1532111 out.go:358] Setting ErrFile to fd 2...
	I1007 12:25:48.665442 1532111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:25:48.665713 1532111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:25:48.665900 1532111 out.go:352] Setting JSON to false
	I1007 12:25:48.665926 1532111 mustload.go:65] Loading cluster: multinode-978861
	I1007 12:25:48.666325 1532111 config.go:182] Loaded profile config "multinode-978861": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:25:48.666345 1532111 status.go:174] checking status of multinode-978861 ...
	I1007 12:25:48.666871 1532111 cli_runner.go:164] Run: docker container inspect multinode-978861 --format={{.State.Status}}
	I1007 12:25:48.667191 1532111 notify.go:220] Checking for updates...
	I1007 12:25:48.685406 1532111 status.go:371] multinode-978861 host status = "Stopped" (err=<nil>)
	I1007 12:25:48.685426 1532111 status.go:384] host is not running, skipping remaining checks
	I1007 12:25:48.685433 1532111 status.go:176] multinode-978861 status: &{Name:multinode-978861 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:25:48.685457 1532111 status.go:174] checking status of multinode-978861-m02 ...
	I1007 12:25:48.685753 1532111 cli_runner.go:164] Run: docker container inspect multinode-978861-m02 --format={{.State.Status}}
	I1007 12:25:48.707826 1532111 status.go:371] multinode-978861-m02 host status = "Stopped" (err=<nil>)
	I1007 12:25:48.707892 1532111 status.go:384] host is not running, skipping remaining checks
	I1007 12:25:48.707913 1532111 status.go:176] multinode-978861-m02 status: &{Name:multinode-978861-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-978861 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1007 12:26:21.754415 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-978861 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.913371571s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-978861 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.60s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-978861
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-978861-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-978861-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.44125ms)

                                                
                                                
-- stdout --
	* [multinode-978861-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-978861-m02' is duplicated with machine name 'multinode-978861-m02' in profile 'multinode-978861'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-978861-m03 --driver=docker  --container-runtime=containerd
E1007 12:26:54.110351 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-978861-m03 --driver=docker  --container-runtime=containerd: (29.589794124s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-978861
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-978861: exit status 80 (347.08371ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-978861 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-978861-m03 already exists in multinode-978861-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-978861-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-978861-m03: (1.977339406s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.09s)
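
Note: the exit-14 path above enforces profile-name uniqueness, and the check also covers machine names inside existing multi-node profiles, which is why "multinode-978861-m02" is rejected even though no profile has that name. A sketch of such a check; the Profile shape and sample data are illustrative assumptions, not minikube's types:

	// Hedged sketch: reject a candidate profile name that collides with an
	// existing profile or with any machine inside one.
	package main

	import "fmt"

	type Profile struct {
		Name     string
		Machines []string // node m02 of profile p is machine "p-m02"
	}

	func nameConflicts(candidate string, existing []Profile) bool {
		for _, p := range existing {
			if p.Name == candidate {
				return true
			}
			for _, m := range p.Machines {
				if m == candidate {
					return true
				}
			}
		}
		return false
	}

	func main() {
		profiles := []Profile{{
			Name:     "multinode-978861",
			Machines: []string{"multinode-978861", "multinode-978861-m02"},
		}}
		fmt.Println(nameConflicts("multinode-978861-m02", profiles)) // true -> MK_USAGE, exit 14
	}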

                                                
                                    
TestPreload (127.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-650796 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1007 12:27:44.818799 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-650796 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.370373769s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-650796 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-650796 image pull gcr.io/k8s-minikube/busybox: (2.052073508s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-650796
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-650796: (12.142035612s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-650796 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-650796 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (27.374680687s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-650796 image list
helpers_test.go:175: Cleaning up "test-preload-650796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-650796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-650796: (2.451425929s)
--- PASS: TestPreload (127.68s)

                                                
                                    
TestScheduledStopUnix (104.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-709157 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-709157 --memory=2048 --driver=docker  --container-runtime=containerd: (28.166337829s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-709157 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-709157 -n scheduled-stop-709157
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-709157 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 12:29:56.862030 1400308 retry.go:31] will retry after 114.932µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.867652 1400308 retry.go:31] will retry after 159.695µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.868802 1400308 retry.go:31] will retry after 189.098µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.869952 1400308 retry.go:31] will retry after 199.68µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.871084 1400308 retry.go:31] will retry after 708.368µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.872208 1400308 retry.go:31] will retry after 784.608µs: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.873283 1400308 retry.go:31] will retry after 1.545342ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.875582 1400308 retry.go:31] will retry after 2.116718ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.878819 1400308 retry.go:31] will retry after 1.673188ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.881043 1400308 retry.go:31] will retry after 3.563151ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.885295 1400308 retry.go:31] will retry after 7.139823ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.892945 1400308 retry.go:31] will retry after 9.365598ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.903181 1400308 retry.go:31] will retry after 13.733302ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.917417 1400308 retry.go:31] will retry after 22.700071ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
I1007 12:29:56.940530 1400308 retry.go:31] will retry after 26.224926ms: open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/scheduled-stop-709157/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-709157 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-709157 -n scheduled-stop-709157
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-709157
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-709157 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-709157
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-709157: exit status 7 (72.240838ms)

                                                
                                                
-- stdout --
	scheduled-stop-709157
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-709157 -n scheduled-stop-709157
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-709157 -n scheduled-stop-709157: exit status 7 (75.969552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-709157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-709157
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-709157: (4.564352904s)
--- PASS: TestScheduledStopUnix (104.31s)
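
Note: the retry.go lines above poll for the scheduled-stop pid file, roughly doubling the delay between attempts. A generic Go sketch of that retry pattern; the pid-file path and the exact backoff policy are assumptions:

	// Hedged sketch of retry-with-growing-delay, as seen in the log above.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func retryUntil(attempts int, initial time.Duration, f func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // roughly matches the doubling delays logged above
		}
		return err
	}

	func main() {
		pidPath := "/tmp/scheduled-stop-709157/pid" // hypothetical path
		err := retryUntil(15, 100*time.Microsecond, func() error {
			_, statErr := os.Stat(pidPath)
			return statErr
		})
		fmt.Println("final result:", err)
	}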

                                                
                                    
TestInsufficientStorage (10.44s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-778936 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-778936 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.966781412s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f200b8bd-16aa-4d35-97b0-addc740ccb92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-778936] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0eeafca8-4b41-438e-a67d-b71a95fa112d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"c185c9b0-eb3b-498e-9919-92ba8e287eff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79085c29-087a-4b3f-9e41-7f6554db9bce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig"}}
	{"specversion":"1.0","id":"c0b1bb85-890d-4c04-b420-6eb825969c8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube"}}
	{"specversion":"1.0","id":"f50fda7c-dc8f-44fe-8e71-138deddc8e16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"92642381-014d-49bd-aa41-ac0f23a05159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f96814f7-4580-4a6e-b99d-9e563ca727fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"41640c29-33b6-472a-9a3f-51aba0b0705d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dc1fc7fa-a393-476f-bc85-2f05a91e3595","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a5f97c6-2389-4baf-bf70-aad92bfff0b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aa056e60-5cf0-4adc-ab1b-99da73bdb09a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-778936\" primary control-plane node in \"insufficient-storage-778936\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb2262f5-e2a9-49ed-a2e8-9d025b717916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f9c9c80-5d20-4591-b10e-6e498b06aca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccd33d93-069a-4df4-ac02-0e6e8f8cfc8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-778936 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-778936 --output=json --layout=cluster: exit status 7 (279.541215ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-778936","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-778936","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 12:31:20.706360 1550777 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-778936" does not appear in /home/jenkins/minikube-integration/19763-1394934/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-778936 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-778936 --output=json --layout=cluster: exit status 7 (291.191493ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-778936","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-778936","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 12:31:20.997391 1550840 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-778936" does not appear in /home/jenkins/minikube-integration/19763-1394934/kubeconfig
	E1007 12:31:21.008865 1550840 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/insufficient-storage-778936/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-778936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-778936
E1007 12:31:21.754368 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-778936: (1.897408052s)
--- PASS: TestInsufficientStorage (10.44s)
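
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as in the stdout above; the run ends with an io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A sketch of consuming such a stream, with the struct trimmed to the fields that actually appear above:

	// Hedged sketch: scan minikube's --output=json stream and surface error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// e.g. piped from: minikube start --output=json ...
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1<<20), 1<<20) // events can be long lines
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}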

                                                
                                    
TestRunningBinaryUpgrade (79.42s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3840053481 start -p running-upgrade-483885 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1007 12:36:21.758325 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3840053481 start -p running-upgrade-483885 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.772441214s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-483885 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1007 12:36:54.109795 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-483885 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.079134579s)
helpers_test.go:175: Cleaning up "running-upgrade-483885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-483885
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-483885: (2.906811229s)
--- PASS: TestRunningBinaryUpgrade (79.42s)

                                                
                                    
TestKubernetesUpgrade (348.83s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.34066414s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-323714
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-323714: (1.226002982s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-323714 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-323714 status --format={{.Host}}: exit status 7 (71.947137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m33.880203837s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-323714 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (111.791697ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-323714] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-323714
	    minikube start -p kubernetes-upgrade-323714 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3237142 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-323714 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-323714 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.641875198s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-323714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-323714
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-323714: (2.42619992s)
--- PASS: TestKubernetesUpgrade (348.83s)
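
Note: the exit-106 branch above confirms that minikube refuses an in-place downgrade (v1.31.1 to v1.20.0) while allowing the follow-up restart at the same version. A sketch of such a guard using a plain semver comparison; this is an illustration, not minikube's code:

	// Hedged sketch of a downgrade guard, using golang.org/x/mod/semver.
	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func checkVersionChange(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				existing, requested) // surfaced as K8S_DOWNGRADE_UNSUPPORTED, exit 106
		}
		return nil
	}

	func main() {
		fmt.Println(checkVersionChange("v1.31.1", "v1.20.0")) // refused
		fmt.Println(checkVersionChange("v1.31.1", "v1.31.1")) // nil: restart allowed
	}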

                                                
                                    
TestMissingContainerUpgrade (183.68s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2333548110 start -p missing-upgrade-950083 --memory=2200 --driver=docker  --container-runtime=containerd
E1007 12:31:54.110315 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2333548110 start -p missing-upgrade-950083 --memory=2200 --driver=docker  --container-runtime=containerd: (1m32.603701087s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-950083
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-950083: (10.319990733s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-950083
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-950083 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-950083 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.711488006s)
helpers_test.go:175: Cleaning up "missing-upgrade-950083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-950083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-950083: (2.288689666s)
--- PASS: TestMissingContainerUpgrade (183.68s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (113.783118ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-148035] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-148035 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-148035 --driver=docker  --container-runtime=containerd: (39.629520862s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-148035 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (20.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --driver=docker  --container-runtime=containerd: (18.484448773s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-148035 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-148035 status -o json: exit status 2 (300.051744ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-148035","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-148035
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-148035: (1.851840948s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.64s)

                                                
                                    
TestNoKubernetes/serial/Start (8.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-148035 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.728339096s)
--- PASS: TestNoKubernetes/serial/Start (8.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-148035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-148035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (342.121806ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
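
Note: systemctl is-active --quiet reports only through its exit status, which ssh forwards, so the "Process exited with status 3" above is the expected signal that the kubelet unit is not running. A local (non-SSH) Go sketch of the same check:

	// Hedged sketch: read a unit's state purely from systemctl's exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func unitActive(unit string) (bool, error) {
		err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
		if err == nil {
			return true, nil // exit 0: active
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return false, nil // non-zero exit (3 above): inactive or stopped
		}
		return false, err // systemctl itself failed to run
	}

	func main() {
		active, err := unitActive("kubelet")
		fmt.Println(active, err)
	}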

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-148035
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-148035: (1.267393939s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-148035 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-148035 --driver=docker  --container-runtime=containerd: (7.861081331s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.86s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-148035 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-148035 "sudo systemctl is-active --quiet service kubelet": exit status 1 (403.688333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (78.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1228853975 start -p stopped-upgrade-047993 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1228853975 start -p stopped-upgrade-047993 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.205907235s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1228853975 -p stopped-upgrade-047993 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1228853975 -p stopped-upgrade-047993 stop: (1.265994976s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-047993 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-047993 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.56677536s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (78.04s)
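
Note: the upgrade flow above runs one profile through three phases with two binaries: start under a pinned old release, stop it, then start the same profile with the binary under test. A sketch of that sequence; the old-binary path is a placeholder, since the test's actual temp binary carries a random suffix:

	// Hedged sketch of the stopped-binary upgrade sequence exercised above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		oldBin := "/tmp/minikube-v1.26.0" // placeholder for the pinned release binary
		newBin := "out/minikube-linux-arm64"
		profile := "stopped-upgrade-047993"
		_ = run(oldBin, "start", "-p", profile, "--memory=2200",
			"--vm-driver=docker", "--container-runtime=containerd")
		_ = run(oldBin, "-p", profile, "stop")
		_ = run(newBin, "start", "-p", profile, "--memory=2200",
			"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=containerd")
	}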

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-047993
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-047993: (1.19174392s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.19s)

                                                
                                    
TestPause/serial/Start (82.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-535221 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-535221 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m22.991248623s)
--- PASS: TestPause/serial/Start (82.99s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.3s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-535221 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-535221 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.282625548s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.30s)

                                                
                                    
TestPause/serial/Pause (0.87s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-535221 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-535221 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-535221 --output=json --layout=cluster: exit status 2 (360.650847ms)

                                                
                                                
-- stdout --
	{"Name":"pause-535221","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-535221","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
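
Note: the --layout=cluster payload above encodes state as HTTP-flavoured status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and (earlier in this report) 507 InsufficientStorage. A sketch that decodes just the fields shown; the struct names are assumptions:

	// Hedged sketch: decode minikube's "status --output=json --layout=cluster" payload.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []node `json:"Nodes"`
	}

	func main() {
		raw := `{"Name":"pause-535221","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-535221","Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName) // Paused Stopped
	}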

                                                
                                    
TestPause/serial/Unpause (1.2s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-535221 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-535221 --alsologtostderr -v=5: (1.196352902s)
--- PASS: TestPause/serial/Unpause (1.20s)

                                                
                                    
TestPause/serial/PauseAgain (1.61s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-535221 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-535221 --alsologtostderr -v=5: (1.608128522s)
--- PASS: TestPause/serial/PauseAgain (1.61s)

TestPause/serial/DeletePaused (2.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-535221 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-535221 --alsologtostderr -v=5: (2.985133517s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

TestPause/serial/VerifyDeletedResources (0.55s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-535221
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-535221: exit status 1 (20.283159ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-535221: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)
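
For reference, the volume check above relies on docker volume inspect failing once the profile is deleted: exit status 1, an empty [] on stdout, and "no such volume" on stderr. A standalone Go sketch of the same assertion (the helper name is illustrative, not code from pause_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeDeleted reports whether Docker no longer knows the named volume,
// mirroring the `docker volume inspect pause-535221` check in this log.
func volumeDeleted(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	// A non-zero exit plus the daemon's "no such volume" message means deleted.
	return err != nil && strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println(volumeDeleted("pause-535221")) // true after `minikube delete`
}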

TestNetworkPlugins/group/false (4.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-798986 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-798986 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (267.883602ms)

-- stdout --
	* [false-798986] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1007 12:38:51.415693 1590861 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:38:51.415897 1590861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:38:51.415924 1590861 out.go:358] Setting ErrFile to fd 2...
	I1007 12:38:51.415961 1590861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:38:51.416330 1590861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-1394934/.minikube/bin
	I1007 12:38:51.416850 1590861 out.go:352] Setting JSON to false
	I1007 12:38:51.417921 1590861 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":94883,"bootTime":1728209849,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I1007 12:38:51.418042 1590861 start.go:139] virtualization:  
	I1007 12:38:51.430901 1590861 out.go:177] * [false-798986] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:38:51.433930 1590861 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:38:51.434123 1590861 notify.go:220] Checking for updates...
	I1007 12:38:51.437479 1590861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:38:51.440213 1590861 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-1394934/kubeconfig
	I1007 12:38:51.442972 1590861 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-1394934/.minikube
	I1007 12:38:51.449405 1590861 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:38:51.452167 1590861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:38:51.455622 1590861 config.go:182] Loaded profile config "force-systemd-flag-448988": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:38:51.455791 1590861 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:38:51.506021 1590861 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:38:51.506151 1590861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:38:51.589045 1590861 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:38:51.571337828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:38:51.589166 1590861 docker.go:318] overlay module found
	I1007 12:38:51.592167 1590861 out.go:177] * Using the docker driver based on user configuration
	I1007 12:38:51.594526 1590861 start.go:297] selected driver: docker
	I1007 12:38:51.594543 1590861 start.go:901] validating driver "docker" against <nil>
	I1007 12:38:51.594557 1590861 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:38:51.597627 1590861 out.go:201] 
	W1007 12:38:51.600071 1590861 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1007 12:38:51.602555 1590861 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-798986 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-798986

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-798986

>>> host: /etc/nsswitch.conf:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/hosts:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/resolv.conf:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-798986

>>> host: crictl pods:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: crictl containers:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> k8s: describe netcat deployment:
error: context "false-798986" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-798986" does not exist

>>> k8s: netcat logs:
error: context "false-798986" does not exist

>>> k8s: describe coredns deployment:
error: context "false-798986" does not exist

>>> k8s: describe coredns pods:
error: context "false-798986" does not exist

>>> k8s: coredns logs:
error: context "false-798986" does not exist

>>> k8s: describe api server pod(s):
error: context "false-798986" does not exist

>>> k8s: api server logs:
error: context "false-798986" does not exist

>>> host: /etc/cni:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: ip a s:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: ip r s:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: iptables-save:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: iptables table nat:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> k8s: describe kube-proxy daemon set:
error: context "false-798986" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-798986" does not exist

>>> k8s: kube-proxy logs:
error: context "false-798986" does not exist

>>> host: kubelet daemon status:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: kubelet daemon config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> k8s: kubelet logs:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-798986

>>> host: docker daemon status:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: docker daemon config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/docker/daemon.json:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: docker system info:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: cri-docker daemon status:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: cri-docker daemon config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: cri-dockerd version:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: containerd daemon status:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: containerd daemon config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/containerd/config.toml:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: containerd config dump:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: crio daemon status:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: crio daemon config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: /etc/crio:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

>>> host: crio config:
* Profile "false-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-798986"

----------------------- debugLogs end: false-798986 [took: 4.295707347s] --------------------------------
helpers_test.go:175: Cleaning up "false-798986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-798986
--- PASS: TestNetworkPlugins/group/false (4.80s)
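
This group passes because minikube correctly rejects the invalid flag combination: with --container-runtime=containerd, --cni=false exits with status 14 (MK_USAGE, per the stderr above) before any cluster is created, which is also why every debug probe reports a missing profile or context. An illustrative Go sketch, not the test's own code, of detecting that usage error by exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same flag combination the test uses; expected to fail fast.
	cmd := exec.Command("minikube", "start", "-p", "false-798986",
		"--cni=false", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("usage error as expected: containerd requires a CNI")
	}
}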

TestStartStop/group/old-k8s-version/serial/FirstStart (118.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1007 12:41:21.753819 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:41:54.109785 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-130031 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (1m58.903585758s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (118.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-130031 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3ecba728-c5a4-43a9-ae87-1431b28d35d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3ecba728-c5a4-43a9-ae87-1431b28d35d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003284832s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-130031 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-130031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-130031 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128424844s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-130031 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-130031 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-130031 --alsologtostderr -v=3: (12.166302651s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.17s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130031 -n old-k8s-version-130031
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-130031 -n old-k8s-version-130031: exit status 7 (74.298543ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-130031 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
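
The "(may be ok)" note reflects that minikube status appears to encode state in its exit code: exit status 7 with "Stopped" on stdout is the expected answer for a stopped profile rather than a hard failure (the same pattern recurs for the no-preload and embed-certs groups below). A small Go sketch of reading the host state that way; the helper name is illustrative, not part of the suite:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState returns whatever `minikube status` prints for the host, treating
// a non-zero exit (e.g. status 7 for a stopped host, as in this run) as data
// rather than failure.
func hostState(profile string) string {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "unknown" // the binary itself could not be run
	}
	return strings.TrimSpace(string(out)) // "Stopped" here, exit status 7
}

func main() {
	fmt.Println(hostState("old-k8s-version-130031"))
}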

TestStartStop/group/no-preload/serial/FirstStart (66.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-842812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-842812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m6.983834615s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.98s)

TestStartStop/group/no-preload/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-842812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88895f9a-0ab7-4dae-b8c2-54cbecfbd53b] Pending
helpers_test.go:344: "busybox" [88895f9a-0ab7-4dae-b8c2-54cbecfbd53b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [88895f9a-0ab7-4dae-b8c2-54cbecfbd53b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004412028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-842812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-842812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1007 12:44:24.820152 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-842812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.098564473s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-842812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-842812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-842812 --alsologtostderr -v=3: (12.177884648s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842812 -n no-preload-842812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842812 -n no-preload-842812: exit status 7 (81.629837ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-842812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (289.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-842812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 12:46:21.753791 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:46:54.109601 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-842812 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m48.88641405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-842812 -n no-preload-842812
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.31s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjf22" [ab9bc651-37de-4bb1-97e7-f1bf5cd9e38f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004525137s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bjf22" [ab9bc651-37de-4bb1-97e7-f1bf5cd9e38f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00409431s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-130031 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-130031 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-130031 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130031 -n old-k8s-version-130031
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130031 -n old-k8s-version-130031: exit status 2 (322.574622ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130031 -n old-k8s-version-130031
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130031 -n old-k8s-version-130031: exit status 2 (321.897199ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-130031 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-130031 -n old-k8s-version-130031
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-130031 -n old-k8s-version-130031
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

TestStartStop/group/embed-certs/serial/FirstStart (51.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-941020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-941020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (51.128757005s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.13s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2kglj" [16ba983a-13dc-4956-bc18-c8c74acba8b6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005250626s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2kglj" [16ba983a-13dc-4956-bc18-c8c74acba8b6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005300206s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-842812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-842812 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (4.08s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-842812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-842812 --alsologtostderr -v=1: (1.014793665s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842812 -n no-preload-842812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842812 -n no-preload-842812: exit status 2 (324.972938ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-842812 -n no-preload-842812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-842812 -n no-preload-842812: exit status 2 (401.994104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-842812 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-842812 --alsologtostderr -v=1: (1.175344862s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-842812 -n no-preload-842812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-842812 -n no-preload-842812
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.08s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-166656 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-166656 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (53.573403775s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.57s)

TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-941020 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a8002468-5b92-4a5d-ac9f-d9064bc1dc74] Pending
helpers_test.go:344: "busybox" [a8002468-5b92-4a5d-ac9f-d9064bc1dc74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a8002468-5b92-4a5d-ac9f-d9064bc1dc74] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.007327946s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-941020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-941020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-941020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.198369209s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-941020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/embed-certs/serial/Stop (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-941020 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-941020 --alsologtostderr -v=3: (12.245820504s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-941020 -n embed-certs-941020
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-941020 -n embed-certs-941020: exit status 7 (77.235348ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-941020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-941020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-941020 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.174878618s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-941020 -n embed-certs-941020
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.56s)
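
Note: --embed-certs makes minikube inline the client certificate data into
kubeconfig instead of referencing .crt/.key file paths. An assumed spot-check
(the jsonpath filter is illustrative, not part of the test):

    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-941020")].user.client-certificate-data}' | head -c 20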

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-166656 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ca07a2b0-a8af-4971-8c5c-af7cbbb72cdc] Pending
helpers_test.go:344: "busybox" [ca07a2b0-a8af-4971-8c5c-af7cbbb72cdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ca07a2b0-a8af-4971-8c5c-af7cbbb72cdc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004928805s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-166656 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-166656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-166656 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.2565337s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-166656 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-166656 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-166656 --alsologtostderr -v=3: (12.534409586s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656: exit status 7 (83.916868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-166656 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-166656 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 12:51:21.754201 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:51:54.109794 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:20.876386 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:20.882835 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:20.894354 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:20.915779 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:20.957133 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:21.038597 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:21.200135 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:21.521815 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:22.163136 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:23.444806 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:26.007951 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:31.130105 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:52:41.372119 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:53:01.854115 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:53:42.815713 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.621939 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.628302 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.639523 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.660897 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.702300 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.783669 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:14.945136 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:15.266776 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:15.908818 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:17.190134 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:19.751966 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:24.874100 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:35.115780 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:54:55.597839 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-166656 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.471868846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.94s)
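
Note: the E1007 cert_rotation lines interleaved above appear to come from
client-go's certificate-reload watcher inside the long-running test process
(pid 1400308), which still references client.crt files for profiles whose
directories earlier tests already tore down (old-k8s-version-130031,
no-preload-842812, and others). They are log noise rather than failures; the
test still passes. An assumed way to confirm which profiles remain:

    out/minikube-linux-arm64 profile list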

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zw9dl" [5dc441b0-31b1-4cd3-89ce-bffb71799bd5] Running
E1007 12:55:04.737927 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004713907s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zw9dl" [5dc441b0-31b1-4cd3-89ce-bffb71799bd5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004132367s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-941020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-941020 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)
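
Note: this check parses minikube's JSON image list and logs anything outside
the expected Kubernetes image set (the "Found non-minikube image" lines above,
which are informational, not failures). An assumed jq equivalent; the repoTags
field name is taken from minikube's JSON output and may differ by version:

    out/minikube-linux-arm64 -p embed-certs-941020 image list --format=json | jq -r '.[].repoTags[]'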

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-941020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-941020 -n embed-certs-941020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-941020 -n embed-certs-941020: exit status 2 (341.550467ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-941020 -n embed-certs-941020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-941020 -n embed-certs-941020: exit status 2 (327.059793ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-941020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-941020 -n embed-certs-941020
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-941020 -n embed-certs-941020
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)
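
Note: while a profile is paused, minikube status exits 2 and reports
APIServer=Paused with Kubelet=Stopped, which is exactly what the test accepts
above. A condensed sketch of the cycle (the combined status template is an
assumed convenience, not the test's own invocation):

    out/minikube-linux-arm64 pause -p embed-certs-941020
    out/minikube-linux-arm64 status -p embed-certs-941020 --format='{{.APIServer}}/{{.Kubelet}}' || true
    out/minikube-linux-arm64 unpause -p embed-certs-941020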

TestStartStop/group/newest-cni/serial/FirstStart (38.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-038413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-038413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (38.270163887s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-545xz" [bcbcc464-c1a9-4f09-b1ae-0c923a7a1829] Running
E1007 12:55:36.559209 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00440859s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-545xz" [bcbcc464-c1a9-4f09-b1ae-0c923a7a1829] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004831378s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-166656 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-166656 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-166656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-166656 --alsologtostderr -v=1: (1.209933195s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656: exit status 2 (420.31831ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656: exit status 2 (418.556559ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-166656 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-166656 --alsologtostderr -v=1: (1.225603498s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-166656 -n default-k8s-diff-port-166656
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.42s)

TestNetworkPlugins/group/auto/Start (58s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (58.004378439s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.00s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-038413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-038413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.428307966s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.43s)

TestStartStop/group/newest-cni/serial/Stop (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-038413 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-038413 --alsologtostderr -v=3: (1.44746783s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-038413 -n newest-cni-038413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-038413 -n newest-cni-038413: exit status 7 (220.669315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-038413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.44s)

TestStartStop/group/newest-cni/serial/SecondStart (23.61s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-038413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 12:56:21.754156 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-038413 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (23.137132783s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-038413 -n newest-cni-038413
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (23.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-038413 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-038413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-038413 --alsologtostderr -v=1: (1.102984662s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-038413 -n newest-cni-038413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-038413 -n newest-cni-038413: exit status 2 (487.797491ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-038413 -n newest-cni-038413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-038413 -n newest-cni-038413: exit status 2 (394.608011ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-038413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-038413 -n newest-cni-038413
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-038413 -n newest-cni-038413
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.85s)
E1007 13:01:49.540851 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.547470 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.558899 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.580363 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.621695 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.703324 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:49.864917 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:50.187218 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:50.828918 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (65.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1007 12:56:37.180284 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m5.103665044s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-798986 "pgrep -a kubelet"
I1007 12:56:49.207173 1400308 config.go:182] Loaded profile config "auto-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

TestNetworkPlugins/group/auto/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hpn9t" [9d995143-7ab5-4f28-8cbb-856d25e62c26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 12:56:54.109386 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hpn9t" [9d995143-7ab5-4f28-8cbb-856d25e62c26] Running
E1007 12:56:58.481109 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010265856s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.37s)
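
Note: each NetCatPod step force-replaces a small "netcat" Deployment (its pod
runs a dnsutils container, per the status lines above) and waits for it to come
up. An assumed manual equivalent of the wait, reusing the test's 15m budget:

    kubectl --context auto-798986 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-798986 wait deploy/netcat --for=condition=Available --timeout=15m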

TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
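
Note: the DNS/Localhost/HairPin trio probes three distinct paths from the
netcat pod: cluster DNS resolution, the pod's own loopback, and the pod
reaching itself back through its own Service name ("netcat"), which only works
when the CNI and kube-proxy configuration support hairpin traffic. The three
commands, verbatim from the log:

    kubectl --context auto-798986 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"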

TestNetworkPlugins/group/calico/Start (70.19s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.192229503s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xbw9k" [0887479f-3c36-4c70-bfd0-1021e2da32d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005608818s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
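
Note: kindnet ships as a kube-system DaemonSet whose pods carry the app=kindnet
label this test waits on. An assumed manual equivalent of the check:

    kubectl --context kindnet-798986 -n kube-system get pods -l app=kindnet -o wide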

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-798986 "pgrep -a kubelet"
I1007 12:57:40.676104 1400308 config.go:182] Loaded profile config "kindnet-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s2mp9" [cc8df99b-9ca5-4809-9722-4ad4af81f7c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s2mp9" [cc8df99b-9ca5-4809-9722-4ad4af81f7c9] Running
E1007 12:57:48.580224 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/old-k8s-version-130031/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007695473s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (57.02s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.020878922s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.02s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8hfwr" [b0cbe94e-565a-4c27-90b0-6e95fc18be4b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005011379s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-798986 "pgrep -a kubelet"
I1007 12:58:38.580717 1400308 config.go:182] Loaded profile config "calico-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gwqht" [c286561b-08f0-4769-ae04-1ae877d16b08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gwqht" [c286561b-08f0-4769-ae04-1ae877d16b08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.00479598s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-798986 "pgrep -a kubelet"
I1007 12:59:13.665520 1400308 config.go:182] Loaded profile config "custom-flannel-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sfjkl" [59472711-8d74-478a-9875-11c8bbb2909c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 12:59:14.621977 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/no-preload-842812/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sfjkl" [59472711-8d74-478a-9875-11c8bbb2909c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004747236s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

TestNetworkPlugins/group/enable-default-cni/Start (76.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.294198744s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (49.19s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (49.189411363s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.19s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-798986 "pgrep -a kubelet"
I1007 13:00:30.385990 1400308 config.go:182] Loaded profile config "enable-default-cni-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dff8j" [1f20e64a-18d2-43ca-bfe8-57d61ccdd8af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dff8j" [1f20e64a-18d2-43ca-bfe8-57d61ccdd8af] Running
E1007 13:00:39.814339 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:39.820697 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:39.832061 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:39.854125 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004190836s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)
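NetCatPod force-replaces the netcat deployment and then waits (up to 15m here) for a pod with the app=netcat label to reach Running. The interleaved cert_rotation errors name a different profile (default-k8s-diff-port-166656) whose client.crt no longer exists; they look like leftover certificate watchers from an earlier test group and did not affect this result. A sketch of watching the rollout by hand, assuming the same context and label selector:

    kubectl --context enable-default-cni-798986 get pods -l app=netcat -w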

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q7pkb" [c6e46c57-1943-4015-a8d0-74ec613a3153] Running
E1007 13:00:39.898826 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:39.980739 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:40.142273 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:00:40.464099 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003865777s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
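ControllerPod only asserts that the CNI's own agent is healthy: for flannel that is the kube-flannel-ds DaemonSet pod carrying the app=flannel label in the kube-flannel namespace. A sketch of the equivalent manual check, assuming the flannel-798986 profile is still up:

    kubectl --context flannel-798986 -n kube-flannel get pods -l app=flannel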

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1007 13:00:41.106213 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-798986 "pgrep -a kubelet"
I1007 13:00:46.172768 1400308 config.go:182] Loaded profile config "flannel-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4dstl" [50200a25-2eec-4c46-9f3a-800442a7f034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 13:00:50.071821 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/default-k8s-diff-port-166656/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4dstl" [50200a25-2eec-4c46-9f3a-800442a7f034] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004574457s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (48.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1007 13:01:04.822320 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/functional-632459/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-798986 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.342400431s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-798986 "pgrep -a kubelet"
I1007 13:01:51.186381 1400308 config.go:182] Loaded profile config "bridge-798986": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-798986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d6r7g" [cd673e83-8458-48db-9b6e-5af679c473da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 13:01:52.111219 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:54.109574 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/addons-268164/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d6r7g" [cd673e83-8458-48db-9b6e-5af679c473da] Running
E1007 13:01:54.673640 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:59.795008 1400308 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-1394934/.minikube/profiles/auto-798986/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004837753s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-798986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-798986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (27/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
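This skip (and the matching binaries skips below) fires because a preloaded tarball for this Kubernetes version and runtime already exists, so there is nothing for the image cache to do. A sketch of checking for the preload on the host; the directory is where minikube keeps preloads by default, and the exact filename (which encodes version, runtime, and arch) is omitted here rather than guessed:

    ls ~/.minikube/cache/preloaded-tarball/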

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-125049 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-125049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-125049
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-908296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-908296
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/kubenet (4.72s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-798986 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-798986

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-798986

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/hosts:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/resolv.conf:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-798986

>>> host: crictl pods:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: crictl containers:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> k8s: describe netcat deployment:
error: context "kubenet-798986" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-798986" does not exist

>>> k8s: netcat logs:
error: context "kubenet-798986" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-798986" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-798986" does not exist

>>> k8s: coredns logs:
error: context "kubenet-798986" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-798986" does not exist

>>> k8s: api server logs:
error: context "kubenet-798986" does not exist

>>> host: /etc/cni:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: ip a s:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: ip r s:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: iptables-save:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: iptables table nat:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-798986" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-798986" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-798986" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: kubelet daemon config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> k8s: kubelet logs:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-798986

>>> host: docker daemon status:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: docker daemon config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: docker system info:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: cri-docker daemon status:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: cri-docker daemon config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: cri-dockerd version:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: containerd daemon status:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: containerd daemon config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: containerd config dump:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: crio daemon status:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: crio daemon config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: /etc/crio:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

>>> host: crio config:
* Profile "kubenet-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-798986"

----------------------- debugLogs end: kubenet-798986 [took: 4.519091186s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-798986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-798986
--- SKIP: TestNetworkPlugins/group/kubenet (4.72s)
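kubenet is a legacy kubelet networking mode rather than a CNI plugin, and this job pins --container-runtime=containerd, which requires a CNI, so the group is skipped before any cluster is created; that is why every debug probe above reports a missing profile or context rather than a real failure. The nearest configuration this run does cover is the bridge CNI group, which passed with:

    out/minikube-linux-arm64 start -p bridge-798986 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=containerd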

TestNetworkPlugins/group/cilium (5.72s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-798986 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-798986

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-798986" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-798986

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-798986

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-798986" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-798986" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-798986

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-798986

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-798986" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-798986" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-798986" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-798986" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-798986" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: kubelet daemon config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> k8s: kubelet logs:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-798986

>>> host: docker daemon status:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: docker daemon config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: docker system info:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: cri-docker daemon status:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: cri-docker daemon config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: cri-dockerd version:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: containerd daemon status:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: containerd daemon config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: containerd config dump:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: crio daemon status:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: crio daemon config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: /etc/crio:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

>>> host: crio config:
* Profile "cilium-798986" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-798986"

----------------------- debugLogs end: cilium-798986 [took: 5.492669784s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-798986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-798986
--- SKIP: TestNetworkPlugins/group/cilium (5.72s)