Test Report: Docker_Linux_containerd_arm64 19476

5d2be5ad06c5c8c1678cb56a2620c3837d13735d:2024-08-19:35852

Failed tests (2/328)

| Order | Failed Test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 201.76       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart | 375.74       |
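
To re-run just the Volcano failure locally, Go's standard subtest filter can target it directly. A minimal sketch, assuming the integration tests live under test/integration (the addons_test.go/helpers_test.go files quoted below) and omitting any repo-specific harness flags for driver and runtime selection:

	go test ./test/integration -v -timeout 60m -run "TestAddons/serial/Volcano"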
TestAddons/serial/Volcano (201.76s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 48.784491ms
addons_test.go:913: volcano-controller stabilized in 48.937257ms
addons_test.go:905: volcano-admission stabilized in 48.979832ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-9ptz5" [2546015d-8f4f-4fba-a9bb-1343c2b8c977] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003471335s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6wjtz" [af59167e-98a6-47ac-b019-87ebb45709e1] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00440202s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-z7wgv" [7ecd4a91-4114-4ad8-b086-ce2ad728b01c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004397228s
addons_test.go:932: (dbg) Run:  kubectl --context addons-288312 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-288312 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-288312 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [8e538506-0172-4cc0-9808-499bf05eeb94] Pending
helpers_test.go:344: "test-job-nginx-0" [8e538506-0172-4cc0-9808-499bf05eeb94] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-288312 -n addons-288312
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-19 11:38:26.227484991 +0000 UTC m=+431.096784583
addons_test.go:964: (dbg) Run:  kubectl --context addons-288312 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-288312 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-e7b6b5bc-2dc0-42c3-bcb4-4bae62002511
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gcgqq (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-gcgqq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From     Message
  ----     ------            ----  ----     -------
  Warning  FailedScheduling  3m    volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-288312 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-288312 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
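
The FailedScheduling event above ("0/1 nodes are unavailable: 1 Insufficient cpu.") indicates the vcjob's request for a full CPU could not fit on the single 2-CPU node once the other enabled addons were running, rather than a Volcano malfunction. A follow-up check, sketched here and not part of the recorded run (assuming the usual single-node minikube layout, where the node name matches the profile name addons-288312):

	kubectl --context addons-288312 describe node addons-288312 | grep -A 8 "Allocated resources"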
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-288312
helpers_test.go:235: (dbg) docker inspect addons-288312:

-- stdout --
	[
	    {
	        "Id": "c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a",
	        "Created": "2024-08-19T11:31:53.782706323Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T11:31:53.918611438Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a/hosts",
	        "LogPath": "/var/lib/docker/containers/c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a/c0cd3d6c3fe06f20a5a0ecf0c6b3841f0c3863d98c90955da32f0f84525c905a-json.log",
	        "Name": "/addons-288312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-288312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-288312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/69d93e9ba3bcd8fade52ca8559de9dcdeeea6941579081c0c4663505db70733f-init/diff:/var/lib/docker/overlay2/ec0afb666e8237335e438a7adc5cdc83345e3266b08ae54bf0b7ce8a2781370a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69d93e9ba3bcd8fade52ca8559de9dcdeeea6941579081c0c4663505db70733f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69d93e9ba3bcd8fade52ca8559de9dcdeeea6941579081c0c4663505db70733f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69d93e9ba3bcd8fade52ca8559de9dcdeeea6941579081c0c4663505db70733f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-288312",
	                "Source": "/var/lib/docker/volumes/addons-288312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-288312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-288312",
	                "name.minikube.sigs.k8s.io": "addons-288312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc8f54523c8f6b3bb15d7f778272487ab936893d1f0b1910de83dd7245eec6f1",
	            "SandboxKey": "/var/run/docker/netns/fc8f54523c8f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-288312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "68d2519e4c4aa68162e3cf1394df49e38d5d2d47ac98fed65a9c02feac7efef1",
	                    "EndpointID": "bba09e6322a03bfb49c51265bd3d4305ff6f20944d2bc4b16d7d519b22fa6dde",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-288312",
	                        "c0cd3d6c3fe0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
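Two HostConfig fields in the inspect output corroborate the sizing behind the scheduling failure: "Memory": 4194304000 bytes / 1024^2 = 4000 MiB (the --memory=4000 start flag), and "NanoCpus": 2000000000 / 10^9 = 2 CPUs. With the addon pods already placed, less than the one full CPU the test job requests remained allocatable, per the FailedScheduling event.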
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-288312 -n addons-288312
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 logs -n 25: (1.531340654s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-475037   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | -p download-only-475037              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| delete  | -p download-only-475037              | download-only-475037   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| start   | -o=json --download-only              | download-only-985567   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | -p download-only-985567              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| delete  | -p download-only-985567              | download-only-985567   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| delete  | -p download-only-475037              | download-only-475037   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| delete  | -p download-only-985567              | download-only-985567   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| start   | --download-only -p                   | download-docker-406447 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | download-docker-406447               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-406447            | download-docker-406447 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| start   | --download-only -p                   | binary-mirror-274914   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | binary-mirror-274914                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39509               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-274914              | binary-mirror-274914   | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| addons  | enable dashboard -p                  | addons-288312          | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | addons-288312                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-288312          | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | addons-288312                        |                        |         |         |                     |                     |
	| start   | -p addons-288312 --wait=true         | addons-288312          | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:35 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:31:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:31:29.747261  299967 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:29.747463  299967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:29.747493  299967 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:29.747512  299967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:29.747764  299967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:31:29.748251  299967 out.go:352] Setting JSON to false
	I0819 11:31:29.749229  299967 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4437,"bootTime":1724062653,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 11:31:29.749327  299967 start.go:139] virtualization:  
	I0819 11:31:29.751614  299967 out.go:177] * [addons-288312] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 11:31:29.753745  299967 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:31:29.753821  299967 notify.go:220] Checking for updates...
	I0819 11:31:29.757021  299967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:29.758936  299967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:31:29.760586  299967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 11:31:29.762722  299967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 11:31:29.764725  299967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:31:29.766721  299967 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:31:29.789176  299967 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:31:29.789294  299967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:29.853105  299967 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 11:31:29.843581446 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:29.853248  299967 docker.go:307] overlay module found
	I0819 11:31:29.855274  299967 out.go:177] * Using the docker driver based on user configuration
	I0819 11:31:29.857147  299967 start.go:297] selected driver: docker
	I0819 11:31:29.857168  299967 start.go:901] validating driver "docker" against <nil>
	I0819 11:31:29.857184  299967 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:31:29.857835  299967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:29.908022  299967 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 11:31:29.899296787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:29.908196  299967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:29.908428  299967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 11:31:29.910354  299967 out.go:177] * Using Docker driver with root privileges
	I0819 11:31:29.912020  299967 cni.go:84] Creating CNI manager for ""
	I0819 11:31:29.912049  299967 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 11:31:29.912061  299967 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:31:29.912155  299967 start.go:340] cluster config:
	{Name:addons-288312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-288312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:29.915338  299967 out.go:177] * Starting "addons-288312" primary control-plane node in "addons-288312" cluster
	I0819 11:31:29.917520  299967 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 11:31:29.919335  299967 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:31:29.921077  299967 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 11:31:29.921128  299967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 11:31:29.921141  299967 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:29.921193  299967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:31:29.921220  299967 preload.go:172] Found /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 11:31:29.921230  299967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 11:31:29.921581  299967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/config.json ...
	I0819 11:31:29.921652  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/config.json: {Name:mk594487846d640d21442bc43df1e0ac7916e8f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:31:29.935839  299967 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:31:29.935953  299967 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:31:29.935977  299967 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 11:31:29.935986  299967 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 11:31:29.935995  299967 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 11:31:29.936004  299967 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 11:31:46.790463  299967 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 11:31:46.790501  299967 cache.go:194] Successfully downloaded all kic artifacts
	I0819 11:31:46.790546  299967 start.go:360] acquireMachinesLock for addons-288312: {Name:mkee90396301ea6e222c07d15bd3fe49c01be471 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 11:31:46.790670  299967 start.go:364] duration metric: took 99.475µs to acquireMachinesLock for "addons-288312"
	I0819 11:31:46.790700  299967 start.go:93] Provisioning new machine with config: &{Name:addons-288312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-288312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 11:31:46.790865  299967 start.go:125] createHost starting for "" (driver="docker")
	I0819 11:31:46.792986  299967 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 11:31:46.793232  299967 start.go:159] libmachine.API.Create for "addons-288312" (driver="docker")
	I0819 11:31:46.793263  299967 client.go:168] LocalClient.Create starting
	I0819 11:31:46.793369  299967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem
	I0819 11:31:47.058497  299967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem
	I0819 11:31:47.514467  299967 cli_runner.go:164] Run: docker network inspect addons-288312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 11:31:47.529356  299967 cli_runner.go:211] docker network inspect addons-288312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 11:31:47.529449  299967 network_create.go:284] running [docker network inspect addons-288312] to gather additional debugging logs...
	I0819 11:31:47.529471  299967 cli_runner.go:164] Run: docker network inspect addons-288312
	W0819 11:31:47.543971  299967 cli_runner.go:211] docker network inspect addons-288312 returned with exit code 1
	I0819 11:31:47.544003  299967 network_create.go:287] error running [docker network inspect addons-288312]: docker network inspect addons-288312: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-288312 not found
	I0819 11:31:47.544022  299967 network_create.go:289] output of [docker network inspect addons-288312]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-288312 not found
	
	** /stderr **
	I0819 11:31:47.544141  299967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:31:47.559218  299967 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017dbe60}
	I0819 11:31:47.559265  299967 network_create.go:124] attempt to create docker network addons-288312 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 11:31:47.559321  299967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-288312 addons-288312
	I0819 11:31:47.627857  299967 network_create.go:108] docker network addons-288312 192.168.49.0/24 created
	I0819 11:31:47.627889  299967 kic.go:121] calculated static IP "192.168.49.2" for the "addons-288312" container
	I0819 11:31:47.627961  299967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 11:31:47.646056  299967 cli_runner.go:164] Run: docker volume create addons-288312 --label name.minikube.sigs.k8s.io=addons-288312 --label created_by.minikube.sigs.k8s.io=true
	I0819 11:31:47.661517  299967 oci.go:103] Successfully created a docker volume addons-288312
	I0819 11:31:47.661634  299967 cli_runner.go:164] Run: docker run --rm --name addons-288312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-288312 --entrypoint /usr/bin/test -v addons-288312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0819 11:31:49.577514  299967 cli_runner.go:217] Completed: docker run --rm --name addons-288312-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-288312 --entrypoint /usr/bin/test -v addons-288312:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.915839996s)
	I0819 11:31:49.577545  299967 oci.go:107] Successfully prepared a docker volume addons-288312
	I0819 11:31:49.577573  299967 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 11:31:49.577593  299967 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 11:31:49.577696  299967 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-288312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 11:31:53.716910  299967 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-288312:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.139163798s)
	I0819 11:31:53.716944  299967 kic.go:203] duration metric: took 4.139347561s to extract preloaded images to volume ...
	W0819 11:31:53.717087  299967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 11:31:53.717219  299967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 11:31:53.766524  299967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-288312 --name addons-288312 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-288312 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-288312 --network addons-288312 --ip 192.168.49.2 --volume addons-288312:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0819 11:31:54.111531  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Running}}
	I0819 11:31:54.131509  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:31:54.148996  299967 cli_runner.go:164] Run: docker exec addons-288312 stat /var/lib/dpkg/alternatives/iptables
	I0819 11:31:54.223870  299967 oci.go:144] the created container "addons-288312" has a running status.
	I0819 11:31:54.223902  299967 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa...
	I0819 11:31:54.413226  299967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 11:31:54.440340  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:31:54.457204  299967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 11:31:54.457230  299967 kic_runner.go:114] Args: [docker exec --privileged addons-288312 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 11:31:54.508621  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:31:54.533230  299967 machine.go:93] provisionDockerMachine start ...
	I0819 11:31:54.533329  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:54.560371  299967 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:54.560648  299967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0819 11:31:54.560658  299967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 11:31:54.561440  299967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35492->127.0.0.1:33138: read: connection reset by peer
	I0819 11:31:57.690262  299967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-288312
	
	I0819 11:31:57.690291  299967 ubuntu.go:169] provisioning hostname "addons-288312"
	I0819 11:31:57.690355  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:57.707322  299967 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:57.707576  299967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0819 11:31:57.707593  299967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-288312 && echo "addons-288312" | sudo tee /etc/hostname
	I0819 11:31:57.846769  299967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-288312
	
	I0819 11:31:57.846913  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:57.871933  299967 main.go:141] libmachine: Using SSH client type: native
	I0819 11:31:57.872173  299967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0819 11:31:57.872196  299967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-288312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-288312/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-288312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 11:31:58.003160  299967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 11:31:58.003192  299967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19476-293809/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-293809/.minikube}
	I0819 11:31:58.003217  299967 ubuntu.go:177] setting up certificates
	I0819 11:31:58.003227  299967 provision.go:84] configureAuth start
	I0819 11:31:58.003288  299967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-288312
	I0819 11:31:58.020297  299967 provision.go:143] copyHostCerts
	I0819 11:31:58.020394  299967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem (1082 bytes)
	I0819 11:31:58.020536  299967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem (1123 bytes)
	I0819 11:31:58.020611  299967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem (1675 bytes)
	I0819 11:31:58.020713  299967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem org=jenkins.addons-288312 san=[127.0.0.1 192.168.49.2 addons-288312 localhost minikube]
	I0819 11:31:58.762237  299967 provision.go:177] copyRemoteCerts
	I0819 11:31:58.762313  299967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 11:31:58.762358  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:58.778679  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:31:58.871907  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 11:31:58.897487  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 11:31:58.922133  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 11:31:58.946454  299967 provision.go:87] duration metric: took 943.211947ms to configureAuth
	I0819 11:31:58.946483  299967 ubuntu.go:193] setting minikube options for container-runtime
	I0819 11:31:58.946706  299967 config.go:182] Loaded profile config "addons-288312": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:31:58.946721  299967 machine.go:96] duration metric: took 4.413468494s to provisionDockerMachine
	I0819 11:31:58.946729  299967 client.go:171] duration metric: took 12.153460173s to LocalClient.Create
	I0819 11:31:58.946756  299967 start.go:167] duration metric: took 12.153525376s to libmachine.API.Create "addons-288312"
	I0819 11:31:58.946769  299967 start.go:293] postStartSetup for "addons-288312" (driver="docker")
	I0819 11:31:58.946780  299967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 11:31:58.946844  299967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 11:31:58.946907  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:58.963438  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:31:59.056044  299967 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 11:31:59.059327  299967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 11:31:59.059364  299967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 11:31:59.059374  299967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 11:31:59.059381  299967 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 11:31:59.059392  299967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/addons for local assets ...
	I0819 11:31:59.059473  299967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/files for local assets ...
	I0819 11:31:59.059494  299967 start.go:296] duration metric: took 112.718314ms for postStartSetup
	I0819 11:31:59.059803  299967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-288312
	I0819 11:31:59.076176  299967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/config.json ...
	I0819 11:31:59.076467  299967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:31:59.076527  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:59.092537  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:31:59.184395  299967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 11:31:59.189044  299967 start.go:128] duration metric: took 12.398161927s to createHost
	I0819 11:31:59.189067  299967 start.go:83] releasing machines lock for "addons-288312", held for 12.398384531s
	I0819 11:31:59.189139  299967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-288312
	I0819 11:31:59.205810  299967 ssh_runner.go:195] Run: cat /version.json
	I0819 11:31:59.205879  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:59.206140  299967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 11:31:59.206211  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:31:59.230361  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:31:59.242660  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:31:59.447897  299967 ssh_runner.go:195] Run: systemctl --version
	I0819 11:31:59.452260  299967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 11:31:59.456329  299967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 11:31:59.480843  299967 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 11:31:59.480963  299967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 11:31:59.509638  299967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
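
The two find commands above first patch the loopback CNI config in place and then park any bridge or podman configs under a .mk_disabled suffix so the CNI loader skips them. A rough local equivalent of the disabling pass, sketched in Go (the directory path comes from the log; the logging and error handling are illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            // Skip directories and configs that were already disabled.
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err == nil {
                    fmt.Println("disabled", src)
                }
            }
        }
    }
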
	I0819 11:31:59.509664  299967 start.go:495] detecting cgroup driver to use...
	I0819 11:31:59.509697  299967 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 11:31:59.509754  299967 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 11:31:59.522587  299967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 11:31:59.534617  299967 docker.go:217] disabling cri-docker service (if available) ...
	I0819 11:31:59.534735  299967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 11:31:59.548970  299967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 11:31:59.563608  299967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 11:31:59.643802  299967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 11:31:59.739369  299967 docker.go:233] disabling docker service ...
	I0819 11:31:59.739507  299967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 11:31:59.760306  299967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 11:31:59.772398  299967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 11:31:59.853593  299967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 11:31:59.943794  299967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 11:31:59.955545  299967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 11:31:59.971864  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 11:31:59.982660  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 11:31:59.993364  299967 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 11:31:59.993456  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 11:32:00.004181  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:32:00.014007  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 11:32:00.038150  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 11:32:00.091660  299967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 11:32:00.118308  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 11:32:00.145801  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 11:32:00.203601  299967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 11:32:00.237504  299967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 11:32:00.273077  299967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 11:32:00.302873  299967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:32:00.476237  299967 ssh_runner.go:195] Run: sudo systemctl restart containerd
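
The run of sed edits above rewrites /etc/containerd/config.toml so the runtime matches the cgroupfs driver detected on the host, most notably forcing SystemdCgroup = false before the daemon restart. A small Go sketch of that one rewrite, using the same regex idea as the sed at 11:31:59.993456 (the file mode and error handling are assumptions):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Force SystemdCgroup = false, preserving the line's indentation.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
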
	I0819 11:32:00.677850  299967 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 11:32:00.678014  299967 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 11:32:00.683249  299967 start.go:563] Will wait 60s for crictl version
	I0819 11:32:00.683399  299967 ssh_runner.go:195] Run: which crictl
	I0819 11:32:00.687696  299967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 11:32:00.736362  299967 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 11:32:00.736528  299967 ssh_runner.go:195] Run: containerd --version
	I0819 11:32:00.759754  299967 ssh_runner.go:195] Run: containerd --version
	I0819 11:32:00.783851  299967 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 11:32:00.785755  299967 cli_runner.go:164] Run: docker network inspect addons-288312 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 11:32:00.802004  299967 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 11:32:00.805787  299967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
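
The bash one-liner above refreshes /etc/hosts: it strips any stale host.minikube.internal entry and appends the gateway mapping for 192.168.49.1. The same idea in Go, for illustration only; it writes in place rather than going through the /tmp/h.$$ copy the log shows:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing tab-separated host.minikube.internal entry.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.49.1\thost.minikube.internal")
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
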
	I0819 11:32:00.816868  299967 kubeadm.go:883] updating cluster {Name:addons-288312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-288312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 11:32:00.817003  299967 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 11:32:00.817064  299967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:32:00.854920  299967 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 11:32:00.854941  299967 containerd.go:534] Images already preloaded, skipping extraction
	I0819 11:32:00.855004  299967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 11:32:00.890782  299967 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 11:32:00.890804  299967 cache_images.go:84] Images are preloaded, skipping loading
	I0819 11:32:00.890812  299967 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 11:32:00.890973  299967 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-288312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-288312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 11:32:00.891046  299967 ssh_runner.go:195] Run: sudo crictl info
	I0819 11:32:00.929015  299967 cni.go:84] Creating CNI manager for ""
	I0819 11:32:00.929043  299967 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 11:32:00.929055  299967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 11:32:00.929100  299967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-288312 NodeName:addons-288312 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 11:32:00.929278  299967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-288312"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 11:32:00.929361  299967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 11:32:00.938673  299967 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 11:32:00.938743  299967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 11:32:00.947647  299967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 11:32:00.966586  299967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 11:32:00.985304  299967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0819 11:32:01.003528  299967 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 11:32:01.007037  299967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 11:32:01.018102  299967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:32:01.098704  299967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 11:32:01.119756  299967 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312 for IP: 192.168.49.2
	I0819 11:32:01.119821  299967 certs.go:194] generating shared ca certs ...
	I0819 11:32:01.119853  299967 certs.go:226] acquiring lock for ca certs: {Name:mkf168e715338554e93ce93584b85aca19a124a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:01.120016  299967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key
	I0819 11:32:01.713094  299967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt ...
	I0819 11:32:01.713130  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt: {Name:mkee85d4dd874915ccbbc728f53ccd085abaceff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:01.713907  299967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key ...
	I0819 11:32:01.713943  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key: {Name:mk1db8d2f0425a31b331357287599168300118f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:01.714050  299967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key
	I0819 11:32:02.145106  299967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.crt ...
	I0819 11:32:02.145139  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.crt: {Name:mkf7bfe34c80294c9a925768112b7a7a5936ae31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:02.145338  299967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key ...
	I0819 11:32:02.145352  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key: {Name:mk2ea63cd66c6e7dfe6fa8f9f789ff441cf989aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:02.145437  299967 certs.go:256] generating profile certs ...
	I0819 11:32:02.145507  299967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.key
	I0819 11:32:02.145524  299967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt with IP's: []
	I0819 11:32:02.785219  299967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt ...
	I0819 11:32:02.785266  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: {Name:mkaf4181dc2d9e3c8b11219b7fa6dcaceabd05ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:02.785456  299967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.key ...
	I0819 11:32:02.785469  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.key: {Name:mk41939fbb427220bcbef75e0d8c386ee50e18aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:02.785559  299967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key.903c735b
	I0819 11:32:02.785580  299967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt.903c735b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 11:32:03.129808  299967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt.903c735b ...
	I0819 11:32:03.129843  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt.903c735b: {Name:mk489d35f4e7309f816f2ff9e3fec07d195a3e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:03.130027  299967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key.903c735b ...
	I0819 11:32:03.130042  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key.903c735b: {Name:mkdf792e8a92904b1ed3ba9b4540351a911ab8e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:03.130129  299967 certs.go:381] copying /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt.903c735b -> /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt
	I0819 11:32:03.130218  299967 certs.go:385] copying /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key.903c735b -> /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key
	I0819 11:32:03.130274  299967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.key
	I0819 11:32:03.130296  299967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.crt with IP's: []
	I0819 11:32:03.945402  299967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.crt ...
	I0819 11:32:03.945469  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.crt: {Name:mk29ab018411923e6ef35a475256ec7dfb946307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:03.945662  299967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.key ...
	I0819 11:32:03.945678  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.key: {Name:mk7b44771967ddfef319c7946a187c69fd68e847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:03.945871  299967 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 11:32:03.945914  299967 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem (1082 bytes)
	I0819 11:32:03.945944  299967 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem (1123 bytes)
	I0819 11:32:03.945978  299967 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem (1675 bytes)
	I0819 11:32:03.946553  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 11:32:03.975178  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 11:32:04.001859  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 11:32:04.030226  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 11:32:04.057356  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 11:32:04.083668  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 11:32:04.108821  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 11:32:04.134365  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 11:32:04.159072  299967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 11:32:04.184060  299967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 11:32:04.206299  299967 ssh_runner.go:195] Run: openssl version
	I0819 11:32:04.212159  299967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 11:32:04.223027  299967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:32:04.227258  299967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:32:04.227369  299967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 11:32:04.234764  299967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
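
The two commands above wire the minikube CA into the system trust store the way c_rehash would: openssl x509 -hash prints the certificate's subject hash (b5213941 here), and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL-based clients resolve the cert by hash. A Go sketch of the same pairing; the paths come from the log, and shelling out to openssl is an illustrative shortcut rather than minikube's code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mirror ln -fs: replace any existing link
        if err := os.Symlink(pemPath, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
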
	I0819 11:32:04.245229  299967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 11:32:04.249518  299967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 11:32:04.249615  299967 kubeadm.go:392] StartCluster: {Name:addons-288312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-288312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:32:04.249752  299967 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 11:32:04.249845  299967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 11:32:04.289271  299967 cri.go:89] found id: ""
	I0819 11:32:04.289341  299967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 11:32:04.298604  299967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 11:32:04.308066  299967 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 11:32:04.308174  299967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 11:32:04.317120  299967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 11:32:04.317148  299967 kubeadm.go:157] found existing configuration files:
	
	I0819 11:32:04.317222  299967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 11:32:04.325933  299967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 11:32:04.326002  299967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 11:32:04.334403  299967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 11:32:04.343472  299967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 11:32:04.343654  299967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 11:32:04.352479  299967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 11:32:04.361625  299967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 11:32:04.361734  299967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 11:32:04.370374  299967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 11:32:04.379104  299967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 11:32:04.379198  299967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 11:32:04.387851  299967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 11:32:04.431691  299967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 11:32:04.431896  299967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 11:32:04.450374  299967 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 11:32:04.450449  299967 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 11:32:04.450487  299967 kubeadm.go:310] OS: Linux
	I0819 11:32:04.450535  299967 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 11:32:04.450585  299967 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 11:32:04.450634  299967 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 11:32:04.450688  299967 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 11:32:04.450738  299967 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 11:32:04.450788  299967 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 11:32:04.450835  299967 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 11:32:04.450904  299967 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 11:32:04.450952  299967 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 11:32:04.511008  299967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 11:32:04.511119  299967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 11:32:04.511211  299967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 11:32:04.516678  299967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 11:32:04.521509  299967 out.go:235]   - Generating certificates and keys ...
	I0819 11:32:04.521701  299967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 11:32:04.521803  299967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 11:32:05.039865  299967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 11:32:05.392172  299967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 11:32:06.046206  299967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 11:32:06.272030  299967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 11:32:06.598855  299967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 11:32:06.599028  299967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-288312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:32:06.947221  299967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 11:32:06.947586  299967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-288312 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 11:32:07.561475  299967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 11:32:07.997999  299967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 11:32:08.545390  299967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 11:32:08.545651  299967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 11:32:08.769132  299967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 11:32:09.451240  299967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 11:32:10.210608  299967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 11:32:10.773417  299967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 11:32:10.962820  299967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 11:32:10.963835  299967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 11:32:10.967032  299967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 11:32:10.969608  299967 out.go:235]   - Booting up control plane ...
	I0819 11:32:10.969729  299967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 11:32:10.969834  299967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 11:32:10.970984  299967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 11:32:10.987467  299967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 11:32:10.993966  299967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 11:32:10.994040  299967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 11:32:11.105174  299967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 11:32:11.105307  299967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 11:32:12.108267  299967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003199151s
	I0819 11:32:12.108626  299967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 11:32:18.610029  299967 kubeadm.go:310] [api-check] The API server is healthy after 6.501320289s
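
The kubelet-check and api-check lines above are plain health polls against local HTTP endpoints, starting with the kubelet's healthz on 127.0.0.1:10248. A minimal sketch of such a poll loop in Go; the 4-minute deadline mirrors the "up to 4m0s" in the log, while the 500ms interval is an assumption:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kubelet healthz")
    }
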
	I0819 11:32:18.630870  299967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 11:32:18.650362  299967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 11:32:18.678583  299967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 11:32:18.678828  299967 kubeadm.go:310] [mark-control-plane] Marking the node addons-288312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 11:32:18.691007  299967 kubeadm.go:310] [bootstrap-token] Using token: 6wmh2v.03gr7cg5dl33ahki
	I0819 11:32:18.693805  299967 out.go:235]   - Configuring RBAC rules ...
	I0819 11:32:18.693928  299967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 11:32:18.699180  299967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 11:32:18.708790  299967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 11:32:18.714938  299967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 11:32:18.719048  299967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 11:32:18.723067  299967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 11:32:19.020312  299967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 11:32:19.454239  299967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 11:32:20.038354  299967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 11:32:20.039878  299967 kubeadm.go:310] 
	I0819 11:32:20.039955  299967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 11:32:20.039962  299967 kubeadm.go:310] 
	I0819 11:32:20.040036  299967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 11:32:20.040042  299967 kubeadm.go:310] 
	I0819 11:32:20.040084  299967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 11:32:20.040400  299967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 11:32:20.040455  299967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 11:32:20.040461  299967 kubeadm.go:310] 
	I0819 11:32:20.040514  299967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 11:32:20.040520  299967 kubeadm.go:310] 
	I0819 11:32:20.040566  299967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 11:32:20.040570  299967 kubeadm.go:310] 
	I0819 11:32:20.040621  299967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 11:32:20.040693  299967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 11:32:20.040759  299967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 11:32:20.040764  299967 kubeadm.go:310] 
	I0819 11:32:20.041025  299967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 11:32:20.041116  299967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 11:32:20.041121  299967 kubeadm.go:310] 
	I0819 11:32:20.041449  299967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6wmh2v.03gr7cg5dl33ahki \
	I0819 11:32:20.041557  299967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3b98b9254b3c870e10598a9b538ffd10deb234c1ef9243a2e4aa0a1abb38bbbb \
	I0819 11:32:20.041579  299967 kubeadm.go:310] 	--control-plane 
	I0819 11:32:20.041584  299967 kubeadm.go:310] 
	I0819 11:32:20.041851  299967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 11:32:20.041862  299967 kubeadm.go:310] 
	I0819 11:32:20.042126  299967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6wmh2v.03gr7cg5dl33ahki \
	I0819 11:32:20.042382  299967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3b98b9254b3c870e10598a9b538ffd10deb234c1ef9243a2e4aa0a1abb38bbbb 
	I0819 11:32:20.048283  299967 kubeadm.go:310] W0819 11:32:04.428621    1030 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:32:20.048616  299967 kubeadm.go:310] W0819 11:32:04.429390    1030 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 11:32:20.048851  299967 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 11:32:20.048958  299967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
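
The --discovery-token-ca-cert-hash in the join command above is kubeadm's pin on the cluster CA: it is the SHA-256 of the CA certificate's Subject Public Key Info (SPKI), not of the whole certificate. A short Go sketch that recomputes it from the CA path used in this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, per kubeadm's scheme.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
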
	I0819 11:32:20.049025  299967 cni.go:84] Creating CNI manager for ""
	I0819 11:32:20.049053  299967 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 11:32:20.052038  299967 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 11:32:20.054872  299967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 11:32:20.059763  299967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 11:32:20.059806  299967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 11:32:20.082975  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 11:32:20.365766  299967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 11:32:20.365906  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:20.365952  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-288312 minikube.k8s.io/updated_at=2024_08_19T11_32_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934 minikube.k8s.io/name=addons-288312 minikube.k8s.io/primary=true
	I0819 11:32:20.521181  299967 ops.go:34] apiserver oom_adj: -16
	I0819 11:32:20.521271  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:21.022168  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:21.522240  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:22.021575  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:22.521450  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:23.022677  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:23.521425  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:24.028695  299967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 11:32:24.131361  299967 kubeadm.go:1113] duration metric: took 3.765510928s to wait for elevateKubeSystemPrivileges
	I0819 11:32:24.131400  299967 kubeadm.go:394] duration metric: took 19.88179015s to StartCluster
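
The burst of repeated kubectl get sa default runs above (roughly every 500ms from 11:32:20 to 11:32:24) is a readiness poll: the cluster-admin binding created at 11:32:20.365906 cannot take effect until the default service account exists, which is what the elevateKubeSystemPrivileges duration metric measures. A stripped-down sketch of that retry loop in Go; the 2-minute deadline is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
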
	I0819 11:32:24.131419  299967 settings.go:142] acquiring lock: {Name:mkc4435b6c8d62b9d001c06e85eb76d8e377373c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:24.131538  299967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:32:24.131913  299967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/kubeconfig: {Name:mk83cf1ee61353d940dd326434ad6e97ed986eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 11:32:24.132123  299967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 11:32:24.132264  299967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 11:32:24.132521  299967 config.go:182] Loaded profile config "addons-288312": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:32:24.132559  299967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 11:32:24.132654  299967 addons.go:69] Setting yakd=true in profile "addons-288312"
	I0819 11:32:24.132681  299967 addons.go:234] Setting addon yakd=true in "addons-288312"
	I0819 11:32:24.132710  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.133204  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.133717  299967 addons.go:69] Setting cloud-spanner=true in profile "addons-288312"
	I0819 11:32:24.133730  299967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-288312"
	I0819 11:32:24.133745  299967 addons.go:234] Setting addon cloud-spanner=true in "addons-288312"
	I0819 11:32:24.133754  299967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-288312"
	I0819 11:32:24.133777  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.133780  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.134183  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.134192  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.136790  299967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-288312"
	I0819 11:32:24.136872  299967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-288312"
	I0819 11:32:24.136907  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.137386  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.142139  299967 addons.go:69] Setting registry=true in profile "addons-288312"
	I0819 11:32:24.142272  299967 addons.go:234] Setting addon registry=true in "addons-288312"
	I0819 11:32:24.142320  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.142780  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.144535  299967 addons.go:69] Setting default-storageclass=true in profile "addons-288312"
	I0819 11:32:24.144665  299967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-288312"
	I0819 11:32:24.145134  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.146639  299967 addons.go:69] Setting storage-provisioner=true in profile "addons-288312"
	I0819 11:32:24.146688  299967 addons.go:234] Setting addon storage-provisioner=true in "addons-288312"
	I0819 11:32:24.146723  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.147325  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.161447  299967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-288312"
	I0819 11:32:24.161497  299967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-288312"
	I0819 11:32:24.161730  299967 addons.go:69] Setting volcano=true in profile "addons-288312"
	I0819 11:32:24.161752  299967 addons.go:234] Setting addon volcano=true in "addons-288312"
	I0819 11:32:24.161790  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.162228  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.164227  299967 addons.go:69] Setting gcp-auth=true in profile "addons-288312"
	I0819 11:32:24.164275  299967 mustload.go:65] Loading cluster: addons-288312
	I0819 11:32:24.164842  299967 config.go:182] Loaded profile config "addons-288312": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:32:24.165350  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.178179  299967 addons.go:69] Setting ingress=true in profile "addons-288312"
	I0819 11:32:24.178216  299967 addons.go:234] Setting addon ingress=true in "addons-288312"
	I0819 11:32:24.178263  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.178734  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.181963  299967 addons.go:69] Setting volumesnapshots=true in profile "addons-288312"
	I0819 11:32:24.182105  299967 addons.go:234] Setting addon volumesnapshots=true in "addons-288312"
	I0819 11:32:24.182168  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.192411  299967 out.go:177] * Verifying Kubernetes components...
	I0819 11:32:24.193661  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.200922  299967 addons.go:69] Setting ingress-dns=true in profile "addons-288312"
	I0819 11:32:24.200973  299967 addons.go:234] Setting addon ingress-dns=true in "addons-288312"
	I0819 11:32:24.201021  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.201476  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.217421  299967 addons.go:69] Setting inspektor-gadget=true in profile "addons-288312"
	I0819 11:32:24.217470  299967 addons.go:234] Setting addon inspektor-gadget=true in "addons-288312"
	I0819 11:32:24.217512  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.217969  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.244519  299967 addons.go:69] Setting metrics-server=true in profile "addons-288312"
	I0819 11:32:24.244558  299967 addons.go:234] Setting addon metrics-server=true in "addons-288312"
	I0819 11:32:24.244596  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.245104  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.258795  299967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 11:32:24.265211  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 11:32:24.275875  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.291770  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.299962  299967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 11:32:24.301044  299967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 11:32:24.302127  299967 addons.go:234] Setting addon default-storageclass=true in "addons-288312"
	I0819 11:32:24.302169  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.302584  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.302961  299967 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0819 11:32:24.316951  299967 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 11:32:24.317040  299967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 11:32:24.317490  299967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 11:32:24.317527  299967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 11:32:24.345932  299967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 11:32:24.346128  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.351885  299967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:32:24.351961  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 11:32:24.352055  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.317634  299967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 11:32:24.357181  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 11:32:24.357284  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.362544  299967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 11:32:24.341377  299967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:32:24.378042  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 11:32:24.378152  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.317646  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 11:32:24.381704  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 11:32:24.387421  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 11:32:24.390984  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 11:32:24.395715  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 11:32:24.396435  299967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 11:32:24.396449  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 11:32:24.396514  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.399212  299967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-288312"
	I0819 11:32:24.399253  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:24.399655  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:24.407718  299967 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0819 11:32:24.408184  299967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:32:24.409501  299967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 11:32:24.412115  299967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 11:32:24.412133  299967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 11:32:24.412191  299967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 11:32:24.412773  299967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 11:32:24.412978  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 11:32:24.413344  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.424695  299967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:32:24.424717  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 11:32:24.424777  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.457525  299967 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 11:32:24.457549  299967 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 11:32:24.457636  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.459059  299967 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0819 11:32:24.459253  299967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 11:32:24.480702  299967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:32:24.483078  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 11:32:24.484461  299967 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 11:32:24.484486  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0819 11:32:24.484560  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.491483  299967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:32:24.491502  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 11:32:24.491569  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.511669  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 11:32:24.511694  299967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 11:32:24.511761  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.512032  299967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 11:32:24.514557  299967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 11:32:24.514576  299967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 11:32:24.514670  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.531224  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 11:32:24.531250  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 11:32:24.531316  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.570046  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.571659  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.571969  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.572252  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.594278  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.630287  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.646643  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.647126  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.666290  299967 out.go:177]   - Using image docker.io/busybox:stable
	I0819 11:32:24.669068  299967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 11:32:24.672679  299967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:32:24.672705  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 11:32:24.672778  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:24.689514  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.690180  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.724127  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.733385  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.746570  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:24.746960  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	W0819 11:32:24.759316  299967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 11:32:24.759348  299967 retry.go:31] will retry after 254.121641ms: ssh: handshake failed: EOF
	I0819 11:32:24.797094  299967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0819 11:32:24.797224  299967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0819 11:32:25.014700  299967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 11:32:25.014779  299967 retry.go:31] will retry after 279.612186ms: ssh: handshake failed: EOF
	I0819 11:32:25.204803  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 11:32:25.233788  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 11:32:25.347721  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 11:32:25.398841  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 11:32:25.444181  299967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 11:32:25.444256  299967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 11:32:25.447001  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 11:32:25.450657  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 11:32:25.460839  299967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 11:32:25.460922  299967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 11:32:25.471626  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 11:32:25.487196  299967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 11:32:25.487268  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 11:32:25.502458  299967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 11:32:25.502535  299967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 11:32:25.525027  299967 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 11:32:25.525106  299967 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 11:32:25.551851  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 11:32:25.599250  299967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 11:32:25.599321  299967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 11:32:25.745978  299967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 11:32:25.746052  299967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 11:32:25.781657  299967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:32:25.781731  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 11:32:25.812482  299967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 11:32:25.812548  299967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 11:32:25.892852  299967 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 11:32:25.892879  299967 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 11:32:26.003170  299967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 11:32:26.003206  299967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 11:32:26.009566  299967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:32:26.009593  299967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 11:32:26.090865  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 11:32:26.098409  299967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 11:32:26.098437  299967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 11:32:26.175440  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 11:32:26.175482  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 11:32:26.202178  299967 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 11:32:26.202205  299967 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 11:32:26.217951  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 11:32:26.331603  299967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:32:26.331624  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 11:32:26.410731  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 11:32:26.410757  299967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 11:32:26.579267  299967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 11:32:26.579289  299967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 11:32:26.742602  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 11:32:26.742630  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 11:32:26.760777  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 11:32:27.048080  299967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:32:27.048150  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 11:32:27.152613  299967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.355480625s)
	I0819 11:32:27.152699  299967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.355460433s)
	I0819 11:32:27.152737  299967 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0819 11:32:27.153712  299967 node_ready.go:35] waiting up to 6m0s for node "addons-288312" to be "Ready" ...
	I0819 11:32:27.159663  299967 node_ready.go:49] node "addons-288312" has status "Ready":"True"
	I0819 11:32:27.159740  299967 node_ready.go:38] duration metric: took 5.970434ms for node "addons-288312" to be "Ready" ...
	I0819 11:32:27.159768  299967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 11:32:27.173949  299967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-n6g2b" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:27.195303  299967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 11:32:27.195326  299967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 11:32:27.238496  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 11:32:27.238596  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 11:32:27.578122  299967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 11:32:27.578191  299967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 11:32:27.661229  299967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-288312" context rescaled to 1 replicas
	I0819 11:32:27.679403  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:32:27.679583  299967 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-n6g2b" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-n6g2b" not found
	I0819 11:32:27.679614  299967 pod_ready.go:82] duration metric: took 505.568817ms for pod "coredns-6f6b679f8f-n6g2b" in "kube-system" namespace to be "Ready" ...
	E0819 11:32:27.679632  299967 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-n6g2b" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-n6g2b" not found
	I0819 11:32:27.679640  299967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:27.681970  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.477080398s)
	I0819 11:32:27.856492  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 11:32:27.856528  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 11:32:27.959639  299967 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:32:27.959665  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 11:32:28.123563  299967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 11:32:28.123592  299967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 11:32:28.201825  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 11:32:28.365145  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 11:32:28.365170  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 11:32:28.825084  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 11:32:28.825119  299967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 11:32:28.851163  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.617190875s)
	I0819 11:32:28.851253  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.503466382s)
	I0819 11:32:28.851286  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.452323827s)
	I0819 11:32:29.166932  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 11:32:29.166955  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 11:32:29.446058  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 11:32:29.446083  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 11:32:29.687981  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:29.768789  299967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:32:29.768856  299967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 11:32:30.151189  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 11:32:31.599480  299967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 11:32:31.599574  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:31.647243  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:31.691877  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:32.115825  299967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 11:32:32.323025  299967 addons.go:234] Setting addon gcp-auth=true in "addons-288312"
	I0819 11:32:32.323127  299967 host.go:66] Checking if "addons-288312" exists ...
	I0819 11:32:32.323643  299967 cli_runner.go:164] Run: docker container inspect addons-288312 --format={{.State.Status}}
	I0819 11:32:32.349648  299967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 11:32:32.349710  299967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-288312
	I0819 11:32:32.383202  299967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/addons-288312/id_rsa Username:docker}
	I0819 11:32:32.796044  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.345307107s)
	I0819 11:32:32.796122  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.34903226s)
	I0819 11:32:32.796139  299967 addons.go:475] Verifying addon ingress=true in "addons-288312"
	I0819 11:32:32.798278  299967 out.go:177] * Verifying ingress addon...
	I0819 11:32:32.801241  299967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 11:32:32.819851  299967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 11:32:32.820171  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:33.306578  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:33.714828  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:33.968585  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:34.376736  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:34.819721  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:35.084857  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.613143158s)
	I0819 11:32:35.084982  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.533063112s)
	I0819 11:32:35.085069  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.994138721s)
	I0819 11:32:35.085103  299967 addons.go:475] Verifying addon registry=true in "addons-288312"
	I0819 11:32:35.085327  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.867339304s)
	I0819 11:32:35.085443  299967 addons.go:475] Verifying addon metrics-server=true in "addons-288312"
	I0819 11:32:35.085518  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.324703308s)
	I0819 11:32:35.085687  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.406250106s)
	W0819 11:32:35.085726  299967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:32:35.085748  299967 retry.go:31] will retry after 190.11589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 11:32:35.085828  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.883972788s)
	I0819 11:32:35.087424  299967 out.go:177] * Verifying registry addon...
	I0819 11:32:35.088916  299967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-288312 service yakd-dashboard -n yakd-dashboard
	
	I0819 11:32:35.091597  299967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 11:32:35.131233  299967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 11:32:35.131263  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:35.276207  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 11:32:35.403166  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:35.603633  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:35.813667  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:35.953612  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.802366379s)
	I0819 11:32:35.953649  299967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-288312"
	I0819 11:32:35.953711  299967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.604031537s)
	I0819 11:32:35.955733  299967 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 11:32:35.955734  299967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 11:32:35.958248  299967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 11:32:35.960291  299967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 11:32:35.962533  299967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 11:32:35.962594  299967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 11:32:35.968814  299967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 11:32:35.968841  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:36.057671  299967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 11:32:36.057700  299967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 11:32:36.096506  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:36.159004  299967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:32:36.159030  299967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 11:32:36.187415  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:36.234047  299967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 11:32:36.307319  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:36.466023  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:36.595956  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:36.806255  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:36.939211  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.662906905s)
	I0819 11:32:36.962854  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:37.095858  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:37.316914  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:37.339226  299967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.105139028s)
	I0819 11:32:37.342390  299967 addons.go:475] Verifying addon gcp-auth=true in "addons-288312"
	I0819 11:32:37.345566  299967 out.go:177] * Verifying gcp-auth addon...
	I0819 11:32:37.348323  299967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 11:32:37.412768  299967 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:32:37.463326  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:37.596321  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:37.807319  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:37.963794  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:38.096078  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:38.188349  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:38.306728  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:38.509038  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:38.596069  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:38.814390  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:38.970474  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:39.097814  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:39.305678  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:39.470628  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:39.596245  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:39.808416  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:39.969932  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:40.097208  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:40.188755  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:40.306654  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:40.508858  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:40.595302  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:40.807225  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:40.965183  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:41.098928  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:41.306842  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:41.465957  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:41.610111  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:41.805713  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:41.963626  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:42.096464  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:42.200139  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:42.306128  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:42.467260  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:42.604085  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:42.806300  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:42.963615  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:43.096068  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:43.307235  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:43.464726  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:43.597825  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:43.807114  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:43.964238  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:44.096253  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:44.311252  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:44.464977  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:44.595146  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:44.686310  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:44.806330  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:44.963874  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:45.105353  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:45.307544  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:45.463348  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:45.596549  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:45.806305  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:45.963937  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:46.098142  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:46.305876  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:46.463628  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:46.595452  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:46.686703  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:46.806377  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:46.962697  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:47.095217  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:47.306433  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:47.464124  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:47.596460  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:47.806236  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:48.007853  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:48.107742  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:48.306700  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:48.464042  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:48.596507  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:48.693233  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:48.806372  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:48.964080  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:49.095829  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:49.309680  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:49.465990  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:49.596943  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:49.810567  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:49.963952  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:50.105199  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:50.314057  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:50.463515  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:50.596073  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:50.805583  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:50.963490  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:51.099460  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:51.186816  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:51.311738  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:51.462957  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:51.595076  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:51.805686  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:51.963035  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:52.095197  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:52.316918  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:52.463715  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:52.595630  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:52.806293  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:52.963371  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:53.096192  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:53.188162  299967 pod_ready.go:103] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"False"
	I0819 11:32:53.306738  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:53.466008  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:53.597450  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:53.808398  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:53.966978  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:54.107233  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:54.216557  299967 pod_ready.go:93] pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.216632  299967 pod_ready.go:82] duration metric: took 26.536977401s for pod "coredns-6f6b679f8f-rsl4l" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.216659  299967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.226158  299967 pod_ready.go:93] pod "etcd-addons-288312" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.226228  299967 pod_ready.go:82] duration metric: took 9.547973ms for pod "etcd-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.226257  299967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.233421  299967 pod_ready.go:93] pod "kube-apiserver-addons-288312" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.233518  299967 pod_ready.go:82] duration metric: took 7.215602ms for pod "kube-apiserver-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.233563  299967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.240415  299967 pod_ready.go:93] pod "kube-controller-manager-addons-288312" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.240630  299967 pod_ready.go:82] duration metric: took 7.033522ms for pod "kube-controller-manager-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.240696  299967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qcfrc" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.247999  299967 pod_ready.go:93] pod "kube-proxy-qcfrc" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.248112  299967 pod_ready.go:82] duration metric: took 7.379345ms for pod "kube-proxy-qcfrc" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.248147  299967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.307136  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:54.470328  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:54.584852  299967 pod_ready.go:93] pod "kube-scheduler-addons-288312" in "kube-system" namespace has status "Ready":"True"
	I0819 11:32:54.584881  299967 pod_ready.go:82] duration metric: took 336.693376ms for pod "kube-scheduler-addons-288312" in "kube-system" namespace to be "Ready" ...
	I0819 11:32:54.584892  299967 pod_ready.go:39] duration metric: took 27.425084103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
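
The pod_ready.go lines above gate on each pod's Ready condition rather than its phase. A minimal Go sketch of that condition check follows; the helper name isPodReady is ours for illustration, not minikube's actual function:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // what the pod_ready.go log lines above print as "Ready":"True"/"False".
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	p := &corev1.Pod{Status: corev1.PodStatus{
    		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
    	}}
    	fmt.Println(isPodReady(p)) // true
    }
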
	I0819 11:32:54.584909  299967 api_server.go:52] waiting for apiserver process to appear ...
	I0819 11:32:54.584974  299967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:32:54.595778  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:54.605370  299967 api_server.go:72] duration metric: took 30.473210786s to wait for apiserver process to appear ...
	I0819 11:32:54.605435  299967 api_server.go:88] waiting for apiserver healthz status ...
	I0819 11:32:54.605470  299967 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 11:32:54.613662  299967 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 11:32:54.614710  299967 api_server.go:141] control plane version: v1.31.0
	I0819 11:32:54.614762  299967 api_server.go:131] duration metric: took 9.305881ms to wait for apiserver health ...
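
The healthz wait above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch of an equivalent poll, assuming anonymous access to /healthz and skipping certificate verification (minikube's real check reuses the cluster's TLS client configuration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200, mirroring the wait logged above. InsecureSkipVerify is an
    // assumption of this sketch, acceptable only for a local test cluster.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver healthz not ok within %s", timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
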
	I0819 11:32:54.614786  299967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 11:32:54.797344  299967 system_pods.go:59] 18 kube-system pods found
	I0819 11:32:54.797391  299967 system_pods.go:61] "coredns-6f6b679f8f-rsl4l" [389f7f28-93d0-4056-a844-e693a4027a4c] Running
	I0819 11:32:54.797402  299967 system_pods.go:61] "csi-hostpath-attacher-0" [86cae515-e9d5-4430-a89b-9b3d4e9c107f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:32:54.797411  299967 system_pods.go:61] "csi-hostpath-resizer-0" [f5b56916-f972-4b5c-b9ac-5fc54ee79c06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:32:54.797426  299967 system_pods.go:61] "csi-hostpathplugin-fkbb6" [a84f429b-5e6a-4ec0-aaad-13218244da69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:32:54.797431  299967 system_pods.go:61] "etcd-addons-288312" [9b5c5af2-2ba9-4e18-bd29-fc6c6bdc8a7b] Running
	I0819 11:32:54.797436  299967 system_pods.go:61] "kindnet-tspnt" [8710f256-aa52-47ce-a04e-a275d5534c33] Running
	I0819 11:32:54.797449  299967 system_pods.go:61] "kube-apiserver-addons-288312" [634a7998-826d-4381-9ce4-f11958d59786] Running
	I0819 11:32:54.797453  299967 system_pods.go:61] "kube-controller-manager-addons-288312" [caa73294-b20c-42b3-b6f3-9b9de947c7a8] Running
	I0819 11:32:54.797465  299967 system_pods.go:61] "kube-ingress-dns-minikube" [709faaf0-bd90-4fc7-8c5a-641f818baa37] Running
	I0819 11:32:54.797473  299967 system_pods.go:61] "kube-proxy-qcfrc" [3093f6f0-0d91-4d3a-b55f-c26ff5c8a9d7] Running
	I0819 11:32:54.797477  299967 system_pods.go:61] "kube-scheduler-addons-288312" [4703c502-1fb3-45b0-a2bb-dabc33c743ef] Running
	I0819 11:32:54.797485  299967 system_pods.go:61] "metrics-server-8988944d9-k867h" [a9885e81-d5f8-4422-84fd-94def24ba0cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:32:54.797493  299967 system_pods.go:61] "nvidia-device-plugin-daemonset-892v8" [e61326b6-6a52-44ff-be2e-4479f137b093] Running
	I0819 11:32:54.797501  299967 system_pods.go:61] "registry-6fb4cdfc84-2w4jh" [21555be8-e2ba-4037-9a5a-a4120f29c7b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 11:32:54.797505  299967 system_pods.go:61] "registry-proxy-wqcnn" [ca7f7a74-631a-42a9-91cd-00a451340d6b] Running
	I0819 11:32:54.797514  299967 system_pods.go:61] "snapshot-controller-56fcc65765-gb9jf" [9d52137d-5397-4599-973a-111f8e8b825a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:32:54.797524  299967 system_pods.go:61] "snapshot-controller-56fcc65765-kxh9s" [44407df4-c5ad-437f-ba84-7dc3e6ec4c61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:32:54.797529  299967 system_pods.go:61] "storage-provisioner" [327f3605-c61b-4146-88f1-95b484395d81] Running
	I0819 11:32:54.797542  299967 system_pods.go:74] duration metric: took 182.73867ms to wait for pod list to return data ...
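
The 18-pod inventory above is a straight list of the kube-system namespace. A client-go sketch that prints a similar listing, assuming the kubeconfig minikube wrote at the default ~/.kube/config location with its current context pointing at the test cluster:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load ~/.kube/config; no explicit context selection is shown here.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Same shape as the system_pods.go lines: name, UID, phase.
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }
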
	I0819 11:32:54.797554  299967 default_sa.go:34] waiting for default service account to be created ...
	I0819 11:32:54.805957  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:54.963029  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:54.984764  299967 default_sa.go:45] found service account: "default"
	I0819 11:32:54.984833  299967 default_sa.go:55] duration metric: took 187.271509ms for default service account to be created ...
	I0819 11:32:54.984851  299967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 11:32:55.095517  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:55.191094  299967 system_pods.go:86] 18 kube-system pods found
	I0819 11:32:55.191131  299967 system_pods.go:89] "coredns-6f6b679f8f-rsl4l" [389f7f28-93d0-4056-a844-e693a4027a4c] Running
	I0819 11:32:55.191142  299967 system_pods.go:89] "csi-hostpath-attacher-0" [86cae515-e9d5-4430-a89b-9b3d4e9c107f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 11:32:55.191151  299967 system_pods.go:89] "csi-hostpath-resizer-0" [f5b56916-f972-4b5c-b9ac-5fc54ee79c06] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 11:32:55.191159  299967 system_pods.go:89] "csi-hostpathplugin-fkbb6" [a84f429b-5e6a-4ec0-aaad-13218244da69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 11:32:55.191164  299967 system_pods.go:89] "etcd-addons-288312" [9b5c5af2-2ba9-4e18-bd29-fc6c6bdc8a7b] Running
	I0819 11:32:55.191169  299967 system_pods.go:89] "kindnet-tspnt" [8710f256-aa52-47ce-a04e-a275d5534c33] Running
	I0819 11:32:55.191174  299967 system_pods.go:89] "kube-apiserver-addons-288312" [634a7998-826d-4381-9ce4-f11958d59786] Running
	I0819 11:32:55.191179  299967 system_pods.go:89] "kube-controller-manager-addons-288312" [caa73294-b20c-42b3-b6f3-9b9de947c7a8] Running
	I0819 11:32:55.191190  299967 system_pods.go:89] "kube-ingress-dns-minikube" [709faaf0-bd90-4fc7-8c5a-641f818baa37] Running
	I0819 11:32:55.191195  299967 system_pods.go:89] "kube-proxy-qcfrc" [3093f6f0-0d91-4d3a-b55f-c26ff5c8a9d7] Running
	I0819 11:32:55.191209  299967 system_pods.go:89] "kube-scheduler-addons-288312" [4703c502-1fb3-45b0-a2bb-dabc33c743ef] Running
	I0819 11:32:55.191215  299967 system_pods.go:89] "metrics-server-8988944d9-k867h" [a9885e81-d5f8-4422-84fd-94def24ba0cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 11:32:55.191220  299967 system_pods.go:89] "nvidia-device-plugin-daemonset-892v8" [e61326b6-6a52-44ff-be2e-4479f137b093] Running
	I0819 11:32:55.191229  299967 system_pods.go:89] "registry-6fb4cdfc84-2w4jh" [21555be8-e2ba-4037-9a5a-a4120f29c7b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0819 11:32:55.191237  299967 system_pods.go:89] "registry-proxy-wqcnn" [ca7f7a74-631a-42a9-91cd-00a451340d6b] Running
	I0819 11:32:55.191248  299967 system_pods.go:89] "snapshot-controller-56fcc65765-gb9jf" [9d52137d-5397-4599-973a-111f8e8b825a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:32:55.191255  299967 system_pods.go:89] "snapshot-controller-56fcc65765-kxh9s" [44407df4-c5ad-437f-ba84-7dc3e6ec4c61] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 11:32:55.191259  299967 system_pods.go:89] "storage-provisioner" [327f3605-c61b-4146-88f1-95b484395d81] Running
	I0819 11:32:55.191273  299967 system_pods.go:126] duration metric: took 206.410381ms to wait for k8s-apps to be running ...
	I0819 11:32:55.191285  299967 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 11:32:55.191356  299967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:32:55.205355  299967 system_svc.go:56] duration metric: took 14.060825ms WaitForService to wait for kubelet
	I0819 11:32:55.205386  299967 kubeadm.go:582] duration metric: took 31.073230544s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
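
The kubelet check above shells out to systemctl, whose --quiet mode signals activity purely via exit status. A sketch of the same probe with os/exec, run locally rather than through minikube's SSH runner (that difference is an assumption of this sketch); the arguments mirror the logged command verbatim:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // isKubeletActive runs the same command as the ssh_runner line above;
    // `systemctl is-active --quiet` exits 0 iff the unit is active, so
    // Run() returning nil means "running". Requires sudo privileges.
    func isKubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", isKubeletActive())
    }
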
	I0819 11:32:55.205407  299967 node_conditions.go:102] verifying NodePressure condition ...
	I0819 11:32:55.307747  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:55.385963  299967 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 11:32:55.385999  299967 node_conditions.go:123] node cpu capacity is 2
	I0819 11:32:55.386012  299967 node_conditions.go:105] duration metric: took 180.59913ms to run NodePressure ...
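
The NodePressure step reads each node's capacity and verifies that no pressure condition is raised. A client-go sketch of that verification, under the same kubeconfig assumption as the listing sketch above:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    		// Report any *Pressure condition that is currently True.
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					fmt.Printf("node %s reports %s\n", n.Name, c.Type)
    				}
    			}
    		}
    	}
    }
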
	I0819 11:32:55.386026  299967 start.go:241] waiting for startup goroutines ...
	I0819 11:32:55.463927  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:55.595998  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:55.806991  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:55.963229  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:56.095454  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:56.309252  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:56.463557  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:56.595395  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 11:32:56.806292  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:57.008770  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:57.095977  299967 kapi.go:107] duration metric: took 22.004376246s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 11:32:57.307507  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:57.463015  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:57.822096  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:57.963748  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:58.306362  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:58.463638  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:58.810853  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:58.966069  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:59.313456  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:59.464474  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:32:59.806308  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:32:59.968619  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:00.315503  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:00.467476  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:00.807396  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:00.969548  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:01.306395  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:01.464771  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:01.806595  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:01.966358  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:02.306044  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:02.507341  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:02.805748  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:02.963757  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:03.314223  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:03.466911  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:03.806519  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:03.964444  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:04.305919  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:04.463740  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:04.806379  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:04.965153  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:05.306137  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:05.462936  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:05.805929  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:05.964464  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:06.305849  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:06.463383  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:06.806393  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:06.964522  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:07.305905  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:07.463111  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:07.805174  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:07.963597  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:08.305568  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:08.465759  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:08.805612  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:08.962860  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:09.306836  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:09.463770  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:09.806064  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:09.968042  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:10.305892  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:10.463607  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:10.807416  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:10.963658  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:11.305491  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:11.463371  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:11.823410  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:11.965279  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:12.305989  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:12.506347  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:12.806369  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:12.990243  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:13.306862  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:13.464313  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:13.806197  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:13.966414  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:14.306820  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:14.464150  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:14.807936  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:14.966532  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:15.306021  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:15.463968  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:15.806416  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:15.963761  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:16.307054  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:16.465103  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:16.806506  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:16.968905  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:17.307626  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:17.467285  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:17.805858  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:17.969794  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:18.307038  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:18.464068  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:18.810583  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:18.963256  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:19.307807  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:19.463437  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:19.807055  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:19.963661  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:20.306426  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:20.464089  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:20.805512  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:20.970961  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:21.307231  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:21.463226  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:21.805719  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:21.962998  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:22.305428  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:22.464590  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:22.806407  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:22.963708  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 11:33:23.307090  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:23.464278  299967 kapi.go:107] duration metric: took 47.506041851s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 11:33:23.805988  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:24.306233  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:24.805711  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:25.305908  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:25.805608  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:26.305557  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:26.806318  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:27.305373  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:27.805288  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:28.305760  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:28.806158  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:29.308232  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:29.806333  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:30.305392  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:30.806791  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:31.306211  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:31.805715  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:32.305943  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:32.806556  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:33.305353  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:33.805821  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:34.306378  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:34.805949  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:35.306031  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:35.806398  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:36.306445  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:36.807398  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:37.306197  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:37.805394  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:38.306146  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:38.806646  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:39.306163  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:39.806731  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:40.306079  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:40.806083  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:41.306396  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:41.808417  299967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 11:33:42.306853  299967 kapi.go:107] duration metric: took 1m9.505609768s to wait for app.kubernetes.io/name=ingress-nginx ...
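
Each kapi.go:96 line above is one iteration of a poll over a label selector, ending in a kapi.go:107 duration line once every matching pod is Running. A sketch approximating that loop; the 500ms interval, the "ingress-nginx" namespace, and the function name waitForPodsRunning are assumptions of this sketch, not minikube's exact internals:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning lists pods matching selector every 500ms and returns
    // once at least one pod matches and all matches are in phase Running.
    func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	for time.Since(start) < timeout {
    		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			allRunning := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					allRunning = false
    				}
    			}
    			if allRunning {
    				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("pods %q not Running within %s", selector, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForPodsRunning(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
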
	I0819 11:34:00.852695  299967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 11:34:00.852718  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:01.353197  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:01.852727  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:02.351991  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:02.852002  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:03.352301  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:03.851975  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:04.352893  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:04.852975  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:05.352754  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:05.853363  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:06.352826  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:06.852054  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:07.352375  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:07.852202  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:08.353054  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:08.851803  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:09.352544  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:09.852639  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:10.352702  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:10.852727  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:11.353119  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:11.852925  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:12.352290  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:12.852309  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:13.353118  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:13.851548  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:14.352989  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:14.852718  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:15.352438  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:15.852017  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:16.352366  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:16.852338  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:17.351644  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:17.852258  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:18.351896  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:18.852874  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:19.354309  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:19.852491  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:20.351881  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:20.852625  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:21.351550  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:21.853196  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:22.352346  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:22.851943  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:23.351729  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:23.852166  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:24.351830  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:24.852379  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:25.352481  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:25.853138  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:26.352738  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:26.852874  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:27.351583  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:27.851680  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:28.352536  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:28.853284  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:29.351790  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:29.855243  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:30.352227  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:30.851760  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:31.355069  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:31.852422  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:32.352399  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:32.852328  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:33.351649  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:33.852464  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:34.352746  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:34.852391  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:35.352171  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:35.852265  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:36.351648  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:36.852791  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:37.352617  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:37.852353  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:38.351466  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:38.852818  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:39.355398  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:39.852379  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:40.353261  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:40.853480  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:41.354603  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:41.853041  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:42.353004  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:42.852423  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:43.352486  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:43.853002  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:44.352038  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:44.852363  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:45.352677  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:45.852044  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:46.351513  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:46.852194  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:47.351564  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:47.852322  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:48.351922  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:48.851788  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:49.353032  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:49.852586  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:50.352491  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:50.852619  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:51.351724  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:51.852634  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:52.352457  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:52.852304  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:53.352386  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:53.852314  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:54.352188  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:54.852067  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:55.351856  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:55.851807  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:56.351625  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:56.852322  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:57.352064  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:57.851656  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:58.352586  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:58.852233  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:59.352541  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:34:59.852648  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:00.367851  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:00.852481  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:01.352167  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:01.851654  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:02.352726  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:02.852756  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:03.352854  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:03.852097  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:04.351660  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:04.852564  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:05.352920  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:05.853483  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:06.352859  299967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 11:35:06.852321  299967 kapi.go:107] duration metric: took 2m29.503997036s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 11:35:06.854433  299967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-288312 cluster.
	I0819 11:35:06.856637  299967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 11:35:06.858302  299967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 11:35:06.860116  299967 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 11:35:06.862054  299967 addons.go:510] duration metric: took 2m42.729485822s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass storage-provisioner-rancher volcano ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 11:35:06.862107  299967 start.go:246] waiting for cluster config update ...
	I0819 11:35:06.862127  299967 start.go:255] writing updated cluster config ...
	I0819 11:35:06.862401  299967 ssh_runner.go:195] Run: rm -f paused
	I0819 11:35:07.204326  299967 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 11:35:07.206978  299967 out.go:177] * Done! kubectl is now configured to use "addons-288312" cluster and "default" namespace by default
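	The `gcp-auth-skip-secret` opt-out mentioned in the output above is applied as a pod label. A minimal sketch, assuming the label value "true" (the log only names the key) and using an illustrative pod name and image, not one of the test's manifests:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                # illustrative name
	      labels:
	        gcp-auth-skip-secret: "true"    # assumed value; the log above only names the key
	    spec:
	      containers:
	      - name: app
	        image: nginx                    # illustrative image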
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	8828628086b96       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   561e9ee749e15       gadget-7j5j6
	f426e1c64c2f7       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   b0646da007e44       gcp-auth-89d5ffd79-tj9hf
	44a34e4e463bd       8b46b1cd48760       4 minutes ago       Running             admission                                0                   6692a7ed3d7ea       volcano-admission-77d7d48b68-6wjtz
	d9086a7147868       289a818c8d9c5       4 minutes ago       Running             controller                               0                   b63e494a13e6a       ingress-nginx-controller-bc57996ff-bs64p
	5624b86db2907       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	7490070d5e2e7       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	a12491e1e418e       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	f0d4d04ef9268       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	a45cae3aa9516       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	888b86c02c096       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   6c7d56b83af30       csi-hostpath-resizer-0
	353c753585395       420193b27261a       5 minutes ago       Exited              patch                                    0                   2cb356378f478       ingress-nginx-admission-patch-txtdn
	74591bdfe0b5c       420193b27261a       5 minutes ago       Exited              create                                   0                   cd98ca4d02291       ingress-nginx-admission-create-r2nlk
	37b3c3aabf2b1       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   d6ed46a74fba1       csi-hostpathplugin-fkbb6
	b08a168a40285       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   d344f53f62572       volcano-scheduler-576bc46687-9ptz5
	cecf242b72cdb       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   4f9b4f165333b       volcano-controllers-56675bb4d5-z7wgv
	90efc18801549       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8d671db3375e9       snapshot-controller-56fcc65765-kxh9s
	39e14d742db4e       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   b2bb7b0097e89       csi-hostpath-attacher-0
	eb350e883cfc7       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   50e4553b7fd87       snapshot-controller-56fcc65765-gb9jf
	27faf44cbf203       77bdba588b953       5 minutes ago       Running             yakd                                     0                   f88950976d3fc       yakd-dashboard-67d98fc6b-snwj8
	b97afd8be583b       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   ae4c5e90e9777       metrics-server-8988944d9-k867h
	94aa0736949e2       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   a9d3cf4715250       local-path-provisioner-86d989889c-5pjrk
	2dbd60a6105ec       6fed88f43b276       5 minutes ago       Running             registry                                 0                   45371abe14ab4       registry-6fb4cdfc84-2w4jh
	454cee1c16cb3       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   c2a6a8dd24678       cloud-spanner-emulator-c4bc9b5f8-p24hn
	9ef03ac09f9e8       2437cf7621777       5 minutes ago       Running             coredns                                  0                   33d352a138a6a       coredns-6f6b679f8f-rsl4l
	d33ba70e9dc4e       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   67a940d1fe056       registry-proxy-wqcnn
	f1d96f378723c       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   a3c16813f7ecd       nvidia-device-plugin-daemonset-892v8
	1705c0dcaecc8       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   4125637772436       kube-ingress-dns-minikube
	0e8d2510b90b3       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   7a91c0cd255e5       storage-provisioner
	bd4c07d04fb0f       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   f06f26068e045       kindnet-tspnt
	c9200b570f85c       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   e8b724a9b6469       kube-proxy-qcfrc
	9b772b5dd68a5       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   4d15e6cc5bb54       kube-controller-manager-addons-288312
	d9414d4d53ad1       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   adbafbc98f4c3       kube-scheduler-addons-288312
	53730afdf767b       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   06f46b7c9b470       kube-apiserver-addons-288312
	912ce0cbd02f0       27e3830e14027       6 minutes ago       Running             etcd                                     0                   506f462db582d       etcd-addons-288312
	
	
	==> containerd <==
	Aug 19 11:35:19 addons-288312 containerd[817]: time="2024-08-19T11:35:19.525575178Z" level=info msg="RemovePodSandbox \"3c19fafd825d1963aab21d9453a6b79788b378c01bc415a0eb11d8aeb9d77ddb\" returns successfully"
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.395260439Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.517414921Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.519011734Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.522501826Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 127.191723ms"
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.522556634Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.524833671Z" level=info msg="CreateContainer within sandbox \"561e9ee749e153d5280864a0eeb1322cafdf4daec8f44e88e41fb65389886a58\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.543819521Z" level=info msg="CreateContainer within sandbox \"561e9ee749e153d5280864a0eeb1322cafdf4daec8f44e88e41fb65389886a58\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5\""
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.544602076Z" level=info msg="StartContainer for \"8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5\""
	Aug 19 11:35:52 addons-288312 containerd[817]: time="2024-08-19T11:35:52.610360614Z" level=info msg="StartContainer for \"8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5\" returns successfully"
	Aug 19 11:35:53 addons-288312 containerd[817]: time="2024-08-19T11:35:53.887053439Z" level=info msg="shim disconnected" id=8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5 namespace=k8s.io
	Aug 19 11:35:53 addons-288312 containerd[817]: time="2024-08-19T11:35:53.887116870Z" level=warning msg="cleaning up after shim disconnected" id=8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5 namespace=k8s.io
	Aug 19 11:35:53 addons-288312 containerd[817]: time="2024-08-19T11:35:53.887129210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 11:35:54 addons-288312 containerd[817]: time="2024-08-19T11:35:54.594523989Z" level=info msg="RemoveContainer for \"d34138318334f9bc9909abc695d1a858906a10c0fd7b95c06885c31846378e92\""
	Aug 19 11:35:54 addons-288312 containerd[817]: time="2024-08-19T11:35:54.601922974Z" level=info msg="RemoveContainer for \"d34138318334f9bc9909abc695d1a858906a10c0fd7b95c06885c31846378e92\" returns successfully"
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.529855291Z" level=info msg="RemoveContainer for \"4a63889b49d6ff12fc0cadf6b5c5e8206495cb8d69bd536f7064200bddb6149d\""
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.535969941Z" level=info msg="RemoveContainer for \"4a63889b49d6ff12fc0cadf6b5c5e8206495cb8d69bd536f7064200bddb6149d\" returns successfully"
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.537804336Z" level=info msg="StopPodSandbox for \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\""
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.545513864Z" level=info msg="TearDown network for sandbox \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\" successfully"
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.545558662Z" level=info msg="StopPodSandbox for \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\" returns successfully"
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.546064025Z" level=info msg="RemovePodSandbox for \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\""
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.546108183Z" level=info msg="Forcibly stopping sandbox \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\""
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.562675182Z" level=info msg="TearDown network for sandbox \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\" successfully"
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.569165016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 19 11:36:19 addons-288312 containerd[817]: time="2024-08-19T11:36:19.569470648Z" level=info msg="RemovePodSandbox \"c41da5f111710dcb3e6cc5752e797c815d285fdf801cdc4760fe3f3460147ded\" returns successfully"
	
	
	==> coredns [9ef03ac09f9e868a7698035183d980ec1d7c7cbe8f95268fcb5e34368df76986] <==
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44338 - 17613 "HINFO IN 527507795809058502.1927678538817802423. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020302657s
	[INFO] 10.244.0.2:59035 - 3577 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000229136s
	[INFO] 10.244.0.2:59035 - 50173 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080572s
	[INFO] 10.244.0.2:52334 - 18535 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000187119s
	[INFO] 10.244.0.2:52334 - 58477 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000085387s
	[INFO] 10.244.0.2:44625 - 40407 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009045s
	[INFO] 10.244.0.2:44625 - 29397 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088168s
	[INFO] 10.244.0.2:53087 - 3926 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099557s
	[INFO] 10.244.0.2:53087 - 3672 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080234s
	[INFO] 10.244.0.2:49457 - 17972 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001974378s
	[INFO] 10.244.0.2:49457 - 31794 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001901282s
	[INFO] 10.244.0.2:47340 - 24271 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077945s
	[INFO] 10.244.0.2:47340 - 42178 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000040351s
	[INFO] 10.244.0.24:36891 - 54789 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003954948s
	[INFO] 10.244.0.24:60857 - 64594 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003974245s
	[INFO] 10.244.0.24:43221 - 45872 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013721s
	[INFO] 10.244.0.24:50767 - 51818 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108386s
	[INFO] 10.244.0.24:45875 - 31742 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096596s
	[INFO] 10.244.0.24:46924 - 49616 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092993s
	[INFO] 10.244.0.24:37415 - 39386 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002407302s
	[INFO] 10.244.0.24:44886 - 60422 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00214093s
	[INFO] 10.244.0.24:51187 - 57229 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000807423s
	[INFO] 10.244.0.24:60468 - 39176 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000810105s
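	The NXDOMAIN/NOERROR pairs above are the usual cluster DNS fan-out: a short name is retried against each search suffix before the absolute query resolves. A sketch of the resolver config this implies for a pod in the gcp-auth namespace, with the search suffixes taken from the queries logged above; the nameserver address is the conventional kube-dns ClusterIP and is assumed here:

	    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10   # assumed kube-dns service IP, not shown in the log
	    options ndots:5         # names with fewer than 5 dots go through the search list first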
	
	
	==> describe nodes <==
	Name:               addons-288312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-288312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=addons-288312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T11_32_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-288312
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-288312"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 11:32:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-288312
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 11:38:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 11:35:22 +0000   Mon, 19 Aug 2024 11:32:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 11:35:22 +0000   Mon, 19 Aug 2024 11:32:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 11:35:22 +0000   Mon, 19 Aug 2024 11:32:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 11:35:22 +0000   Mon, 19 Aug 2024 11:32:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-288312
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 327ed6179bbb4b2892f269f92501bf75
	  System UUID:                b87e6860-106a-4a24-b901-c86d9d51a1f3
	  Boot ID:                    e46e48f2-e1cc-40c1-bc17-f5e6b67a31cd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-p24hn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-7j5j6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-tj9hf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-bs64p    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-6f6b679f8f-rsl4l                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-fkbb6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-288312                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-tspnt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-288312                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-288312       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-qcfrc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-288312                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-8988944d9-k867h              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-892v8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-6fb4cdfc84-2w4jh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-wqcnn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-gb9jf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-kxh9s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-5pjrk     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-6wjtz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-z7wgv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-9ptz5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-snwj8              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-288312 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-288312 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-288312 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-288312 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-288312 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-288312 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-288312 event: Registered Node addons-288312 in Controller
	
	
	==> dmesg <==
	[Aug19 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014023] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.475601] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065531] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002573] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.018918] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004637] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004032] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.688432] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.937418] kauditd_printk_skb: 36 callbacks suppressed
	[Aug19 10:41] hrtimer: interrupt took 13643356 ns
	[Aug19 11:00] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [912ce0cbd02f0f9f43800354684e738831e57742ebb1ca0541e13525ea6ae564] <==
	{"level":"info","ts":"2024-08-19T11:32:12.754521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T11:32:12.754601Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T11:32:12.754622Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T11:32:12.754710Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T11:32:12.754731Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T11:32:13.138926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T11:32:13.139063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T11:32:13.139139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T11:32:13.139191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T11:32:13.139223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T11:32:13.139268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T11:32:13.139305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T11:32:13.143006Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:32:13.147097Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-288312 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T11:32:13.147334Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:32:13.147513Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:32:13.147602Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:32:13.147658Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T11:32:13.147704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T11:32:13.148592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:32:13.151829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T11:32:13.161076Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T11:32:13.161883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T11:32:13.190818Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T11:32:13.192864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [f426e1c64c2f76736776c24d47e1a584c37f3e19d6282f526c6a0dc5e374cc6d] <==
	2024/08/19 11:35:06 GCP Auth Webhook started!
	2024/08/19 11:35:25 Ready to marshal response ...
	2024/08/19 11:35:25 Ready to write response ...
	2024/08/19 11:35:25 Ready to marshal response ...
	2024/08/19 11:35:25 Ready to write response ...
	
	
	==> kernel <==
	 11:38:28 up  1:20,  0 users,  load average: 0.25, 1.36, 2.35
	Linux addons-288312 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bd4c07d04fb0f88c06c90975dc39fc8dad7ef7658a4234f1c388f1b3427f70e0] <==
	I0819 11:37:08.432718       1 main.go:299] handling current node
	I0819 11:37:18.432843       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:37:18.432881       1 main.go:299] handling current node
	W0819 11:37:25.933638       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 11:37:25.933690       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 11:37:28.432569       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:37:28.432675       1 main.go:299] handling current node
	W0819 11:37:29.157800       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:37:29.157839       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0819 11:37:32.155844       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:37:32.155883       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 11:37:38.432934       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:37:38.432972       1 main.go:299] handling current node
	I0819 11:37:48.432726       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:37:48.432774       1 main.go:299] handling current node
	I0819 11:37:58.432977       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:37:58.433017       1 main.go:299] handling current node
	W0819 11:38:08.278795       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 11:38:08.278829       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 11:38:08.433224       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:38:08.433263       1 main.go:299] handling current node
	I0819 11:38:18.432588       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 11:38:18.432629       1 main.go:299] handling current node
	W0819 11:38:25.699253       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 11:38:25.699374       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
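	The kindnet warnings above are RBAC denials: the kube-system:kindnet service account lacks list/watch on namespaces, pods, and networkpolicies at the cluster scope. A minimal ClusterRole sketch covering exactly the resources named in the errors (illustrative name; not kindnet's shipped manifest):

	    apiVersion: rbac.authorization.k8s.io/v1
	    kind: ClusterRole
	    metadata:
	      name: kindnet-read              # illustrative
	    rules:
	    - apiGroups: [""]
	      resources: ["namespaces", "pods"]
	      verbs: ["list", "watch"]
	    - apiGroups: ["networking.k8s.io"]
	      resources: ["networkpolicies"]
	      verbs: ["list", "watch"]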
	
	
	==> kube-apiserver [53730afdf767bf33dc81a106f67e5871be88a60aa4655282fe0bfe58876a73c4] <==
	W0819 11:33:38.254981       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:39.300337       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:40.308186       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:40.333414       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.131.229:443: connect: connection refused
	E0819 11:33:40.333456       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.131.229:443: connect: connection refused" logger="UnhandledError"
	W0819 11:33:40.335104       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:40.377932       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.131.229:443: connect: connection refused
	E0819 11:33:40.377983       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.131.229:443: connect: connection refused" logger="UnhandledError"
	W0819 11:33:40.379580       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:41.325658       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:42.359120       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:43.364121       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:44.467039       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:45.535020       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:46.582608       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:47.621245       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:33:48.659680       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.22.9:443: connect: connection refused
	W0819 11:34:00.366236       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.131.229:443: connect: connection refused
	E0819 11:34:00.366280       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.131.229:443: connect: connection refused" logger="UnhandledError"
	W0819 11:34:40.343895       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.131.229:443: connect: connection refused
	E0819 11:34:40.343938       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.131.229:443: connect: connection refused" logger="UnhandledError"
	W0819 11:34:40.386572       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.108.131.229:443: connect: connection refused
	E0819 11:34:40.386873       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.108.131.229:443: connect: connection refused" logger="UnhandledError"
	I0819 11:35:25.739217       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0819 11:35:25.773239       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
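	The two failure modes above come from each webhook's failurePolicy: "failing closed" (volcano) rejects requests while the webhook backend is unreachable, while "failing open" (gcp-auth) admits them and logs the error. A minimal sketch of the distinction, abbreviated and not the addons' actual manifests:

	    apiVersion: admissionregistration.k8s.io/v1
	    kind: MutatingWebhookConfiguration
	    metadata:
	      name: example                    # illustrative
	    webhooks:
	    - name: mutatequeue.volcano.sh
	      failurePolicy: Fail              # reported as "failing closed" in the apiserver log
	      clientConfig:
	        service:
	          name: volcano-admission-service
	          namespace: volcano-system
	          path: /queues/mutate
	      sideEffects: None
	      admissionReviewVersions: ["v1"]

	The gcp-auth webhook would instead carry failurePolicy: Ignore, which the dispatcher reports as "failing open".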
	
	
	==> kube-controller-manager [9b772b5dd68a589c09d16aedc7038c253707edeb032c7241934aeb2af93489d5] <==
	I0819 11:34:40.384043       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:40.397700       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:40.406394       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:40.410292       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:40.426081       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:41.368498       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:41.388012       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:42.384654       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:42.481820       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:43.394531       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:43.475512       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:43.488704       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:43.512011       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:43.535785       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 11:34:44.400586       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:44.409603       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:34:44.416162       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 11:35:06.483599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="16.582194ms"
	I0819 11:35:06.485013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="47.851µs"
	I0819 11:35:13.027086       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 11:35:13.064702       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 11:35:14.008525       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 11:35:14.050435       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 11:35:22.567545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-288312"
	I0819 11:35:25.463180       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [c9200b570f85c6afe75359e15d08b5689c3bc4bdf42513fab04f3c0d51086839] <==
	I0819 11:32:25.990340       1 server_linux.go:66] "Using iptables proxy"
	I0819 11:32:26.088328       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 11:32:26.088398       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 11:32:26.143715       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 11:32:26.143778       1 server_linux.go:169] "Using iptables Proxier"
	I0819 11:32:26.145869       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 11:32:26.146319       1 server.go:483] "Version info" version="v1.31.0"
	I0819 11:32:26.146335       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 11:32:26.170670       1 config.go:197] "Starting service config controller"
	I0819 11:32:26.170705       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 11:32:26.170726       1 config.go:104] "Starting endpoint slice config controller"
	I0819 11:32:26.170731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 11:32:26.173693       1 config.go:326] "Starting node config controller"
	I0819 11:32:26.173720       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 11:32:26.271605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 11:32:26.271656       1 shared_informer.go:320] Caches are synced for service config
	I0819 11:32:26.273952       1 shared_informer.go:320] Caches are synced for node config
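	The kube-proxy startup warning above suggests `--nodeport-addresses primary`; that flag maps to this fragment of the component config (a sketch, assuming the v1alpha1 config API behind the flag):

	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses: ["primary"]   # accept NodePort traffic only on the node's primary IPs, per the warning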
	
	
	==> kube-scheduler [d9414d4d53ad145fb71648813961eb7146f0b1050c3f29a8be01691362535287] <==
	W0819 11:32:17.145240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 11:32:17.145265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.147221       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 11:32:17.147261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.147450       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 11:32:17.147476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.149679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:32:17.149724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.149900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 11:32:17.149927       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.150119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:32:17.150146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.949028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 11:32:17.949293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:17.957054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 11:32:17.957299       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:18.017312       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 11:32:18.017359       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 11:32:18.067785       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 11:32:18.067897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:18.072484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 11:32:18.072744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 11:32:18.255636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 11:32:18.255882       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0819 11:32:20.327497       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
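
The "forbidden" reflector errors above are startup noise: kube-scheduler begins its informer lists before the apiserver has finished serving RBAC bindings, and the closing "Caches are synced" line shows it recovered. If such errors persisted, one hedged first check would be an impersonated permissions query (context name taken from this run):

    # suggested check, not executed in this run
    kubectl --context addons-288312 auth can-i list pods --as=system:kube-scheduler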
	
	
	==> kubelet <==
	Aug 19 11:36:19 addons-288312 kubelet[1485]: I0819 11:36:19.528310    1485 scope.go:117] "RemoveContainer" containerID="4a63889b49d6ff12fc0cadf6b5c5e8206495cb8d69bd536f7064200bddb6149d"
	Aug 19 11:36:26 addons-288312 kubelet[1485]: I0819 11:36:26.393947    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:36:26 addons-288312 kubelet[1485]: E0819 11:36:26.394173    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:36:27 addons-288312 kubelet[1485]: I0819 11:36:27.394278    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wqcnn" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:36:35 addons-288312 kubelet[1485]: I0819 11:36:35.393807    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-2w4jh" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:36:41 addons-288312 kubelet[1485]: I0819 11:36:41.394444    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:36:41 addons-288312 kubelet[1485]: E0819 11:36:41.394603    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:36:42 addons-288312 kubelet[1485]: I0819 11:36:42.393908    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-892v8" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:36:55 addons-288312 kubelet[1485]: I0819 11:36:55.393526    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:36:55 addons-288312 kubelet[1485]: E0819 11:36:55.394213    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:37:09 addons-288312 kubelet[1485]: I0819 11:37:09.396340    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:37:09 addons-288312 kubelet[1485]: E0819 11:37:09.396987    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:37:24 addons-288312 kubelet[1485]: I0819 11:37:24.393405    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:37:24 addons-288312 kubelet[1485]: E0819 11:37:24.393611    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:37:33 addons-288312 kubelet[1485]: I0819 11:37:33.393881    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wqcnn" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:37:36 addons-288312 kubelet[1485]: I0819 11:37:36.393904    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:37:36 addons-288312 kubelet[1485]: E0819 11:37:36.394607    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:37:39 addons-288312 kubelet[1485]: I0819 11:37:39.395234    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-2w4jh" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:37:49 addons-288312 kubelet[1485]: I0819 11:37:49.394345    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:37:49 addons-288312 kubelet[1485]: E0819 11:37:49.394537    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:38:00 addons-288312 kubelet[1485]: I0819 11:38:00.394453    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-892v8" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 11:38:04 addons-288312 kubelet[1485]: I0819 11:38:04.393927    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:38:04 addons-288312 kubelet[1485]: E0819 11:38:04.394122    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
	Aug 19 11:38:18 addons-288312 kubelet[1485]: I0819 11:38:18.394298    1485 scope.go:117] "RemoveContainer" containerID="8828628086b9661f99e6d6c803ba1237ffbf98d709abe38e78c1845a491893c5"
	Aug 19 11:38:18 addons-288312 kubelet[1485]: E0819 11:38:18.394526    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-7j5j6_gadget(d8a09e01-a318-4543-8ef9-09bd8e0603da)\"" pod="gadget/gadget-7j5j6" podUID="d8a09e01-a318-4543-8ef9-09bd8e0603da"
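
The kubelet section shows one persistent problem: the gadget container keeps restarting into a 2m40s CrashLoopBackOff, while the "gcp-auth not found" pull-secret messages are typically benign when that addon is not enabled. A reasonable next step, not captured here, would be to pull the crashed container's previous logs:

    # suggested follow-up; pod name taken from the log lines above
    kubectl --context addons-288312 -n gadget logs gadget-7j5j6 --previous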
	
	
	==> storage-provisioner [0e8d2510b90b36be1dce6edf90d3a83c3242d7fda14f366b2e188fdf39e2380d] <==
	I0819 11:32:29.865916       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 11:32:29.900460       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 11:32:29.900504       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 11:32:29.911506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 11:32:29.919182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-288312_4a6b40ac-57e7-4c42-8dfe-fffd099717d9!
	I0819 11:32:29.927115       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7a594d9-d422-4223-8a17-4cbf041ed338", APIVersion:"v1", ResourceVersion:"528", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-288312_4a6b40ac-57e7-4c42-8dfe-fffd099717d9 became leader
	I0819 11:32:30.019653       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-288312_4a6b40ac-57e7-4c42-8dfe-fffd099717d9!
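
storage-provisioner itself came up cleanly: it acquired the Endpoints-based leader-election lock and started its controller within a second. If leadership were ever in doubt, the lock object named in these lines could be inspected directly:

    # suggested inspection command; object name taken from the log above
    kubectl --context addons-288312 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml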
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-288312 -n addons-288312
helpers_test.go:261: (dbg) Run:  kubectl --context addons-288312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-r2nlk ingress-nginx-admission-patch-txtdn test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-288312 describe pod ingress-nginx-admission-create-r2nlk ingress-nginx-admission-patch-txtdn test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-288312 describe pod ingress-nginx-admission-create-r2nlk ingress-nginx-admission-patch-txtdn test-job-nginx-0: exit status 1 (90.758717ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r2nlk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-txtdn" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-288312 describe pod ingress-nginx-admission-create-r2nlk ingress-nginx-admission-patch-txtdn test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (201.76s)
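
Note: the post-mortem describe above returned NotFound for all three pods; they were evidently garbage-collected between the field-selector listing and the describe call. Capturing the manifests in the same pass would avoid that race, for example:

    # hypothetical one-shot capture, not part of this run
    kubectl --context addons-288312 get pods -A --field-selector=status.phase!=Running -o yaml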

TestStartStop/group/old-k8s-version/serial/SecondStart (375.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-091610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 12:20:07.265644  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-091610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m12.04981384s)

-- stdout --
	* [old-k8s-version-091610] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-091610" primary control-plane node in "old-k8s-version-091610" cluster
	* Pulling base image v0.0.44-1723740748-19452 ...
	* Restarting existing docker container for "old-k8s-version-091610" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-091610 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0819 12:19:24.978785  501046 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:19:24.979190  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:24.979199  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:19:24.979204  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:19:24.979453  501046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 12:19:24.979833  501046 out.go:352] Setting JSON to false
	I0819 12:19:24.980758  501046 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7312,"bootTime":1724062653,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 12:19:24.980830  501046 start.go:139] virtualization:  
	I0819 12:19:24.983206  501046 out.go:177] * [old-k8s-version-091610] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 12:19:24.985595  501046 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:19:24.985663  501046 notify.go:220] Checking for updates...
	I0819 12:19:24.990441  501046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:19:24.992323  501046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:19:24.994088  501046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 12:19:24.995968  501046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 12:19:24.997588  501046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:19:24.999773  501046 config.go:182] Loaded profile config "old-k8s-version-091610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 12:19:25.002587  501046 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0819 12:19:25.004182  501046 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:19:25.058007  501046 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:19:25.058205  501046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:19:25.152454  501046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-19 12:19:25.138165191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:19:25.152567  501046 docker.go:307] overlay module found
	I0819 12:19:25.154954  501046 out.go:177] * Using the docker driver based on existing profile
	I0819 12:19:25.156904  501046 start.go:297] selected driver: docker
	I0819 12:19:25.156925  501046 start.go:901] validating driver "docker" against &{Name:old-k8s-version-091610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-091610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:19:25.157033  501046 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:19:25.157973  501046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:19:25.247164  501046 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-19 12:19:25.234709512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:19:25.247567  501046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:19:25.247611  501046 cni.go:84] Creating CNI manager for ""
	I0819 12:19:25.247669  501046 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:19:25.247730  501046 start.go:340] cluster config:
	{Name:old-k8s-version-091610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-091610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:19:25.251247  501046 out.go:177] * Starting "old-k8s-version-091610" primary control-plane node in "old-k8s-version-091610" cluster
	I0819 12:19:25.253208  501046 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 12:19:25.255734  501046 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 12:19:25.257612  501046 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 12:19:25.257672  501046 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 12:19:25.257685  501046 cache.go:56] Caching tarball of preloaded images
	I0819 12:19:25.257786  501046 preload.go:172] Found /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 12:19:25.257799  501046 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0819 12:19:25.257956  501046 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/config.json ...
	I0819 12:19:25.258058  501046 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	W0819 12:19:25.279313  501046 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 12:19:25.279336  501046 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:19:25.279412  501046 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 12:19:25.279429  501046 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 12:19:25.279439  501046 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 12:19:25.279447  501046 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 12:19:25.279453  501046 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 12:19:25.429578  501046 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 12:19:25.429612  501046 cache.go:194] Successfully downloaded all kic artifacts
	I0819 12:19:25.429657  501046 start.go:360] acquireMachinesLock for old-k8s-version-091610: {Name:mk6c13055e5fe32b288f1f3d7096b34ca2f392b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:19:25.429725  501046 start.go:364] duration metric: took 45.471µs to acquireMachinesLock for "old-k8s-version-091610"
	I0819 12:19:25.429744  501046 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:19:25.429751  501046 fix.go:54] fixHost starting: 
	I0819 12:19:25.430038  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:25.451905  501046 fix.go:112] recreateIfNeeded on old-k8s-version-091610: state=Stopped err=<nil>
	W0819 12:19:25.451940  501046 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:19:25.454253  501046 out.go:177] * Restarting existing docker container for "old-k8s-version-091610" ...
	I0819 12:19:25.455968  501046 cli_runner.go:164] Run: docker start old-k8s-version-091610
	I0819 12:19:25.820260  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:25.852674  501046 kic.go:430] container "old-k8s-version-091610" state is running.
	I0819 12:19:25.853065  501046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-091610
	I0819 12:19:25.880891  501046 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/config.json ...
	I0819 12:19:25.881110  501046 machine.go:93] provisionDockerMachine start ...
	I0819 12:19:25.881166  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:25.904802  501046 main.go:141] libmachine: Using SSH client type: native
	I0819 12:19:25.905087  501046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0819 12:19:25.905098  501046 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:19:25.905784  501046 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0819 12:19:29.046869  501046 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-091610
	
	I0819 12:19:29.046987  501046 ubuntu.go:169] provisioning hostname "old-k8s-version-091610"
	I0819 12:19:29.047072  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:29.070243  501046 main.go:141] libmachine: Using SSH client type: native
	I0819 12:19:29.070494  501046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0819 12:19:29.070505  501046 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-091610 && echo "old-k8s-version-091610" | sudo tee /etc/hostname
	I0819 12:19:29.223954  501046 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-091610
	
	I0819 12:19:29.224085  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:29.247197  501046 main.go:141] libmachine: Using SSH client type: native
	I0819 12:19:29.247462  501046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0819 12:19:29.247479  501046 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-091610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-091610/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-091610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 12:19:29.388194  501046 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:19:29.388232  501046 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19476-293809/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-293809/.minikube}
	I0819 12:19:29.388293  501046 ubuntu.go:177] setting up certificates
	I0819 12:19:29.388304  501046 provision.go:84] configureAuth start
	I0819 12:19:29.388414  501046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-091610
	I0819 12:19:29.411396  501046 provision.go:143] copyHostCerts
	I0819 12:19:29.411461  501046 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem, removing ...
	I0819 12:19:29.411476  501046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem
	I0819 12:19:29.411566  501046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem (1082 bytes)
	I0819 12:19:29.411687  501046 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem, removing ...
	I0819 12:19:29.411698  501046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem
	I0819 12:19:29.411734  501046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem (1123 bytes)
	I0819 12:19:29.411817  501046 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem, removing ...
	I0819 12:19:29.411826  501046 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem
	I0819 12:19:29.411857  501046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem (1675 bytes)
	I0819 12:19:29.411929  501046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-091610 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-091610]
	I0819 12:19:29.548562  501046 provision.go:177] copyRemoteCerts
	I0819 12:19:29.548686  501046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:19:29.548747  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:29.571341  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:29.668769  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:19:29.699182  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:19:29.727570  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0819 12:19:29.756692  501046 provision.go:87] duration metric: took 368.368402ms to configureAuth
	I0819 12:19:29.756717  501046 ubuntu.go:193] setting minikube options for container-runtime
	I0819 12:19:29.756960  501046 config.go:182] Loaded profile config "old-k8s-version-091610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 12:19:29.756983  501046 machine.go:96] duration metric: took 3.875858109s to provisionDockerMachine
	I0819 12:19:29.757003  501046 start.go:293] postStartSetup for "old-k8s-version-091610" (driver="docker")
	I0819 12:19:29.757017  501046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:19:29.757091  501046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:19:29.757161  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:29.779733  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:29.885352  501046 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:19:29.889691  501046 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 12:19:29.889738  501046 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 12:19:29.889752  501046 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 12:19:29.889760  501046 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 12:19:29.889775  501046 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/addons for local assets ...
	I0819 12:19:29.889831  501046 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/files for local assets ...
	I0819 12:19:29.889916  501046 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem -> 2991912.pem in /etc/ssl/certs
	I0819 12:19:29.890023  501046 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:19:29.900071  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem --> /etc/ssl/certs/2991912.pem (1708 bytes)
	I0819 12:19:29.930249  501046 start.go:296] duration metric: took 173.227149ms for postStartSetup
	I0819 12:19:29.930355  501046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:19:29.930425  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:29.950563  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:30.085961  501046 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 12:19:30.100409  501046 fix.go:56] duration metric: took 4.670646848s for fixHost
	I0819 12:19:30.100433  501046 start.go:83] releasing machines lock for "old-k8s-version-091610", held for 4.670699081s
	I0819 12:19:30.100520  501046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-091610
	I0819 12:19:30.128358  501046 ssh_runner.go:195] Run: cat /version.json
	I0819 12:19:30.128417  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:30.130011  501046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:19:30.130122  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:30.166305  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:30.200157  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:30.279570  501046 ssh_runner.go:195] Run: systemctl --version
	I0819 12:19:30.410109  501046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:19:30.415388  501046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 12:19:30.440483  501046 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
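
The find/sed one-liner above normalizes any loopback CNI config so older plugins accept it: it injects a "name": "loopback" field where missing and pins "cniVersion" to "1.0.0". A quick way to confirm the patched file on the node (a suggested check, not part of this run):

    out/minikube-linux-arm64 -p old-k8s-version-091610 ssh -- sudo cat /etc/cni/net.d/*loopback.conf*
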
	I0819 12:19:30.440563  501046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:19:30.450274  501046 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0819 12:19:30.450297  501046 start.go:495] detecting cgroup driver to use...
	I0819 12:19:30.450329  501046 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 12:19:30.450391  501046 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 12:19:30.465898  501046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 12:19:30.479952  501046 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:19:30.480057  501046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:19:30.494861  501046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:19:30.508182  501046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:19:30.621778  501046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:19:30.732010  501046 docker.go:233] disabling docker service ...
	I0819 12:19:30.732128  501046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:19:30.750394  501046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:19:30.766259  501046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:19:30.882943  501046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:19:30.981373  501046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:19:30.999092  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:19:31.030768  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0819 12:19:31.045902  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 12:19:31.060572  501046 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 12:19:31.060701  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 12:19:31.072523  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:19:31.083695  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 12:19:31.095321  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:19:31.109003  501046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:19:31.123569  501046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 12:19:31.139317  501046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:19:31.152588  501046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:19:31.163848  501046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:19:31.274007  501046 ssh_runner.go:195] Run: sudo systemctl restart containerd
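
The sed pass above aligns containerd with the cgroupfs driver detected on the host (SystemdCgroup = false, the runc v2 runtime, and the v1.20-era pause:3.2 sandbox image) before restarting the daemon. The effective setting could be verified in place with something like:

    out/minikube-linux-arm64 -p old-k8s-version-091610 ssh -- grep -n SystemdCgroup /etc/containerd/config.toml
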
	I0819 12:19:31.572648  501046 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 12:19:31.572791  501046 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 12:19:31.578482  501046 start.go:563] Will wait 60s for crictl version
	I0819 12:19:31.578648  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:19:31.585018  501046 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:19:31.641587  501046 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 12:19:31.641709  501046 ssh_runner.go:195] Run: containerd --version
	I0819 12:19:31.667209  501046 ssh_runner.go:195] Run: containerd --version
	I0819 12:19:31.693763  501046 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0819 12:19:31.695661  501046 cli_runner.go:164] Run: docker network inspect old-k8s-version-091610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 12:19:31.723910  501046 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0819 12:19:31.727617  501046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:19:31.745564  501046 kubeadm.go:883] updating cluster {Name:old-k8s-version-091610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-091610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:19:31.745701  501046 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 12:19:31.745765  501046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:19:31.800242  501046 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 12:19:31.800268  501046 containerd.go:534] Images already preloaded, skipping extraction
	I0819 12:19:31.800331  501046 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:19:31.854176  501046 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 12:19:31.854219  501046 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:19:31.854263  501046 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0819 12:19:31.854465  501046 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-091610 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-091610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
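
The generated kubelet unit above pins the v1.20.0 binary and legacy flags such as --network-plugin=cni, which only exist on kubelets from before the dockershim removal; the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On a live node the effective unit could be dumped with:

    out/minikube-linux-arm64 -p old-k8s-version-091610 ssh -- systemctl cat kubelet
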
	I0819 12:19:31.854613  501046 ssh_runner.go:195] Run: sudo crictl info
	I0819 12:19:31.927027  501046 cni.go:84] Creating CNI manager for ""
	I0819 12:19:31.927073  501046 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:19:31.927086  501046 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:19:31.927169  501046 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-091610 NodeName:old-k8s-version-091610 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0819 12:19:31.927486  501046 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-091610"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
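The generated file above holds several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A hedged sketch of one way to sanity-check such a file before handing it to kubeadm, assuming gopkg.in/yaml.v3 is available; this is not minikube's own validation path:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// Split the multi-document file on YAML document separators.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		if m["kind"] == "KubeletConfiguration" {
			// Must agree with the container runtime's cgroup driver
			// (cgroupfs in this run), or the kubelet will not start.
			fmt.Println("cgroupDriver:", m["cgroupDriver"])
		}
	}
}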
	I0819 12:19:31.927660  501046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0819 12:19:31.943520  501046 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:19:31.943633  501046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:19:31.956274  501046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0819 12:19:31.977532  501046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:19:31.999113  501046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0819 12:19:32.024468  501046 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0819 12:19:32.028448  501046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:19:32.041717  501046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:19:32.144089  501046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:19:32.161532  501046 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610 for IP: 192.168.85.2
	I0819 12:19:32.161598  501046 certs.go:194] generating shared ca certs ...
	I0819 12:19:32.161641  501046 certs.go:226] acquiring lock for ca certs: {Name:mkf168e715338554e93ce93584b85aca19a124a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:19:32.161821  501046 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key
	I0819 12:19:32.161953  501046 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key
	I0819 12:19:32.161981  501046 certs.go:256] generating profile certs ...
	I0819 12:19:32.162122  501046 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.key
	I0819 12:19:32.162246  501046 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/apiserver.key.73008fe7
	I0819 12:19:32.162330  501046 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/proxy-client.key
	I0819 12:19:32.162497  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191.pem (1338 bytes)
	W0819 12:19:32.162557  501046 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191_empty.pem, impossibly tiny 0 bytes
	I0819 12:19:32.162580  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:19:32.162636  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:19:32.162695  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:19:32.162748  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem (1675 bytes)
	I0819 12:19:32.162825  501046 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem (1708 bytes)
	I0819 12:19:32.163706  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:19:32.230343  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:19:32.268275  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:19:32.312417  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:19:32.370979  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0819 12:19:32.398144  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 12:19:32.422981  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:19:32.447653  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:19:32.474748  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191.pem --> /usr/share/ca-certificates/299191.pem (1338 bytes)
	I0819 12:19:32.499582  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem --> /usr/share/ca-certificates/2991912.pem (1708 bytes)
	I0819 12:19:32.524668  501046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:19:32.551678  501046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:19:32.570272  501046 ssh_runner.go:195] Run: openssl version
	I0819 12:19:32.576476  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/299191.pem && ln -fs /usr/share/ca-certificates/299191.pem /etc/ssl/certs/299191.pem"
	I0819 12:19:32.585831  501046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299191.pem
	I0819 12:19:32.589918  501046 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:42 /usr/share/ca-certificates/299191.pem
	I0819 12:19:32.590019  501046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299191.pem
	I0819 12:19:32.597185  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/299191.pem /etc/ssl/certs/51391683.0"
	I0819 12:19:32.606224  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2991912.pem && ln -fs /usr/share/ca-certificates/2991912.pem /etc/ssl/certs/2991912.pem"
	I0819 12:19:32.615393  501046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2991912.pem
	I0819 12:19:32.619431  501046 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:42 /usr/share/ca-certificates/2991912.pem
	I0819 12:19:32.619542  501046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2991912.pem
	I0819 12:19:32.627134  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2991912.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:19:32.637516  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:19:32.647918  501046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:19:32.652187  501046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:19:32.652266  501046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:19:32.659941  501046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
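The hash-then-symlink sequence above is how an OpenSSL-style trust directory is populated: `openssl x509 -hash -noout` prints the certificate's subject hash, and the file is linked as <hash>.0 under /etc/ssl/certs so TLS libraries can locate it. A minimal Go equivalent of that one step, with paths taken from the log:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link, mirroring ln -fs
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}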
	I0819 12:19:32.669126  501046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:19:32.673331  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:19:32.681139  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:19:32.688959  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:19:32.696500  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:19:32.704001  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:19:32.711487  501046 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
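Each `-checkend 86400` call above exits nonzero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same check in Go, as a sketch using only the standard library:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s; would regenerate")
	}
}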
	I0819 12:19:32.719037  501046 kubeadm.go:392] StartCluster: {Name:old-k8s-version-091610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-091610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:19:32.719191  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 12:19:32.719282  501046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:19:32.774797  501046 cri.go:89] found id: "52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:19:32.774908  501046 cri.go:89] found id: "ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:19:32.774938  501046 cri.go:89] found id: "7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:19:32.774958  501046 cri.go:89] found id: "495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:19:32.774969  501046 cri.go:89] found id: "b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:19:32.774974  501046 cri.go:89] found id: "f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:19:32.774977  501046 cri.go:89] found id: "448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:19:32.774980  501046 cri.go:89] found id: "1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:19:32.774983  501046 cri.go:89] found id: ""
	I0819 12:19:32.775050  501046 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0819 12:19:32.789561  501046 cri.go:116] JSON = null
	W0819 12:19:32.789659  501046 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
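The mismatch warning above follows directly from the `JSON = null` line: `runc list -f json` printed a bare null, and in Go a JSON null unmarshals into a nil slice without error, so the paused-container list counts as 0 even though `crictl ps` had just returned 8 IDs. A minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	var paused []container
	// `runc list -f json` printed literally "null" in the log above.
	if err := json.Unmarshal([]byte("null"), &paused); err != nil {
		panic(err)
	}
	// A JSON null leaves the slice nil: no error, zero elements.
	fmt.Println(len(paused), "paused containers") // 0, vs 8 from crictl ps
}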
	I0819 12:19:32.789752  501046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:19:32.801098  501046 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 12:19:32.801165  501046 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 12:19:32.801253  501046 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 12:19:32.817339  501046 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:19:32.817862  501046 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-091610" does not appear in /home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:19:32.818048  501046 kubeconfig.go:62] /home/jenkins/minikube-integration/19476-293809/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-091610" cluster setting kubeconfig missing "old-k8s-version-091610" context setting]
	I0819 12:19:32.818423  501046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/kubeconfig: {Name:mk83cf1ee61353d940dd326434ad6e97ed986eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:19:32.827668  501046 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 12:19:32.837017  501046 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0819 12:19:32.837048  501046 kubeadm.go:597] duration metric: took 35.864592ms to restartPrimaryControlPlane
	I0819 12:19:32.837058  501046 kubeadm.go:394] duration metric: took 118.031051ms to StartCluster
	I0819 12:19:32.837085  501046 settings.go:142] acquiring lock: {Name:mkc4435b6c8d62b9d001c06e85eb76d8e377373c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:19:32.837163  501046 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:19:32.837826  501046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/kubeconfig: {Name:mk83cf1ee61353d940dd326434ad6e97ed986eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:19:32.838047  501046 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 12:19:32.838282  501046 config.go:182] Loaded profile config "old-k8s-version-091610": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0819 12:19:32.838359  501046 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:19:32.838497  501046 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-091610"
	I0819 12:19:32.838555  501046 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-091610"
	W0819 12:19:32.838580  501046 addons.go:243] addon storage-provisioner should already be in state true
	I0819 12:19:32.838630  501046 host.go:66] Checking if "old-k8s-version-091610" exists ...
	I0819 12:19:32.839176  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:32.839354  501046 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-091610"
	I0819 12:19:32.839415  501046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-091610"
	I0819 12:19:32.839651  501046 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-091610"
	I0819 12:19:32.839678  501046 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-091610"
	W0819 12:19:32.839685  501046 addons.go:243] addon metrics-server should already be in state true
	I0819 12:19:32.839708  501046 host.go:66] Checking if "old-k8s-version-091610" exists ...
	I0819 12:19:32.840091  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:32.840433  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:32.843149  501046 addons.go:69] Setting dashboard=true in profile "old-k8s-version-091610"
	I0819 12:19:32.843208  501046 addons.go:234] Setting addon dashboard=true in "old-k8s-version-091610"
	W0819 12:19:32.843222  501046 addons.go:243] addon dashboard should already be in state true
	I0819 12:19:32.843253  501046 host.go:66] Checking if "old-k8s-version-091610" exists ...
	I0819 12:19:32.847701  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:32.847857  501046 out.go:177] * Verifying Kubernetes components...
	I0819 12:19:32.855277  501046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:19:32.893213  501046 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-091610"
	W0819 12:19:32.893234  501046 addons.go:243] addon default-storageclass should already be in state true
	I0819 12:19:32.893260  501046 host.go:66] Checking if "old-k8s-version-091610" exists ...
	I0819 12:19:32.899124  501046 cli_runner.go:164] Run: docker container inspect old-k8s-version-091610 --format={{.State.Status}}
	I0819 12:19:32.915855  501046 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:19:32.918103  501046 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 12:19:32.918843  501046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:32.918866  501046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:19:32.918967  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:32.919922  501046 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 12:19:32.919944  501046 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 12:19:32.920003  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:32.944680  501046 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 12:19:32.946965  501046 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 12:19:32.950995  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 12:19:32.951024  501046 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 12:19:32.951095  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:32.951506  501046 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:19:32.951519  501046 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:19:32.951568  501046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-091610
	I0819 12:19:32.995351  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:33.015178  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:33.028664  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:33.031236  501046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/old-k8s-version-091610/id_rsa Username:docker}
	I0819 12:19:33.071882  501046 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:19:33.125413  501046 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-091610" to be "Ready" ...
	I0819 12:19:33.182761  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:33.230534  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 12:19:33.230673  501046 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 12:19:33.264273  501046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 12:19:33.264346  501046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 12:19:33.284518  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:19:33.334275  501046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 12:19:33.334356  501046 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 12:19:33.341960  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 12:19:33.342036  501046 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 12:19:33.392432  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 12:19:33.392527  501046 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0819 12:19:33.437636  501046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:19:33.437741  501046 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 12:19:33.452841  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 12:19:33.452946  501046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0819 12:19:33.475519  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.475637  501046 retry.go:31] will retry after 137.405506ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
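The apply/retry pairs that follow all share one shape: run kubectl apply, and on failure sleep for a growing, jittered delay before trying again while the apiserver comes back up. A hedged sketch of that pattern in Go (not minikube's exact retry.go implementation):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, doubling a jittered delay between
// failures, and returns the last error if every attempt fails.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // back off while the apiserver comes up
	}
	return err
}

func main() {
	_ = retry(5, 150*time.Millisecond, func() error {
		return fmt.Errorf("connection to the server localhost:8443 was refused")
	})
}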
	I0819 12:19:33.515649  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:19:33.522848  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 12:19:33.523037  501046 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0819 12:19:33.591685  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.591790  501046 retry.go:31] will retry after 279.990015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.597560  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 12:19:33.597657  501046 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0819 12:19:33.614002  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:33.686650  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 12:19:33.686739  501046 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 12:19:33.758737  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 12:19:33.758818  501046 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0819 12:19:33.804293  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.804407  501046 retry.go:31] will retry after 265.061495ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.846068  501046 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:19:33.846203  501046 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0819 12:19:33.847572  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.847664  501046 retry.go:31] will retry after 540.885292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:33.871890  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:19:33.872014  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:34.054710  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.054748  501046 retry.go:31] will retry after 212.857605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.070192  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:34.072316  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.072353  501046 retry.go:31] will retry after 357.157926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 12:19:34.173891  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.173931  501046 retry.go:31] will retry after 430.466387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.268192  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:34.366532  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.366570  501046 retry.go:31] will retry after 780.880635ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.388828  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:34.430207  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 12:19:34.563525  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.563573  501046 retry.go:31] will retry after 397.188583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.604659  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:34.636095  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.636128  501046 retry.go:31] will retry after 432.643581ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 12:19:34.734657  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.734691  501046 retry.go:31] will retry after 558.921681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:34.961252  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:35.069632  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:19:35.126988  501046 node_ready.go:53] error getting node "old-k8s-version-091610": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-091610": dial tcp 192.168.85.2:8443: connect: connection refused
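The node readiness poll above is failing at the TCP layer, not inside the Kubernetes API: nothing is listening on 192.168.85.2:8443 yet. A quick way to reproduce that distinction is a plain dial with a timeout:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial means the apiserver process is not up; a timeout
	// would instead point at networking between host and node.
	conn, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not ready:", err) // connect: connection refused
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}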
	I0819 12:19:35.148044  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:35.231531  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:35.231563  501046 retry.go:31] will retry after 1.206552804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:35.293791  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:35.338355  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:35.338392  501046 retry.go:31] will retry after 793.862878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 12:19:35.382070  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:35.382110  501046 retry.go:31] will retry after 842.964237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 12:19:35.416627  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:35.416673  501046 retry.go:31] will retry after 1.180278189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.132402  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 12:19:36.213396  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.213430  501046 retry.go:31] will retry after 919.544577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.225602  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:36.303960  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.303995  501046 retry.go:31] will retry after 1.357666565s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.438256  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 12:19:36.514418  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.514449  501046 retry.go:31] will retry after 1.083678205s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.597684  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:36.676421  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:36.676452  501046 retry.go:31] will retry after 720.547492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.133366  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 12:19:37.213381  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.213412  501046 retry.go:31] will retry after 1.304449122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.397889  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:37.471061  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.471092  501046 retry.go:31] will retry after 2.701956028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.598356  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:37.626193  501046 node_ready.go:53] error getting node "old-k8s-version-091610": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-091610": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 12:19:37.662523  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:37.676198  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.676227  501046 retry.go:31] will retry after 1.795669914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0819 12:19:37.741193  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:37.741233  501046 retry.go:31] will retry after 2.788600785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
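The irregular delays above (1.304s, 2.702s, 1.796s, 2.789s, ...) come from minikube's retry helper (retry.go:31), which re-runs each failed `kubectl apply` with a growing, jittered backoff while the apiserver on localhost:8443 is still coming back up after the restart. A minimal sketch of that pattern, with illustrative names (applyWithRetry is not minikube's actual helper):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply --force -f <file>` until it
    // succeeds or attempts are exhausted, sleeping a growing, jittered
    // delay between tries -- the same shape as the "will retry after ..."
    // intervals in the log.
    func applyWithRetry(file string, attempts int) error {
        var err error
        base := time.Second
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", "apply", "--force", "-f", file).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("apply %s: %v: %s", file, e, out)
            // jitter: sleep between 0.5x and 1.5x of the current base delay
            time.Sleep(time.Duration(float64(base) * (0.5 + rand.Float64())))
            base *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 4); err != nil {
            fmt.Println("giving up:", err)
        }
    }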
	I0819 12:19:38.518672  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 12:19:38.597366  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:38.597397  501046 retry.go:31] will retry after 2.027723227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:39.472052  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 12:19:39.540448  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:39.540480  501046 retry.go:31] will retry after 3.548939672s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.127341  501046 node_ready.go:53] error getting node "old-k8s-version-091610": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-091610": dial tcp 192.168.85.2:8443: connect: connection refused
	I0819 12:19:40.173974  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0819 12:19:40.248338  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.248399  501046 retry.go:31] will retry after 3.342364066s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.530962  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0819 12:19:40.607369  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.607400  501046 retry.go:31] will retry after 4.102404137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.625549  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0819 12:19:40.740599  501046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:40.740630  501046 retry.go:31] will retry after 1.47418963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0819 12:19:42.215381  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:19:43.089980  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:19:43.591932  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:19:44.710162  501046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:19:48.965357  501046 node_ready.go:49] node "old-k8s-version-091610" has status "Ready":"True"
	I0819 12:19:48.965382  501046 node_ready.go:38] duration metric: took 15.839890261s for node "old-k8s-version-091610" to be "Ready" ...
	I0819 12:19:48.965392  501046 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
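At this point the node is Ready and the test enters its extra readiness gate: every system-critical pod (CoreDNS, etcd, apiserver, controller-manager, kube-proxy, scheduler) must report Ready within 6m0s. The pod_ready lines that follow poll the pod's PodReady condition; a small sketch of that check, assuming the k8s.io/api/core/v1 types (isPodReady is an illustrative name, not minikube's function):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady mirrors the check behind the pod_ready lines: a pod counts
    // as Ready only when its PodReady condition reports True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{
            Conditions: []corev1.PodCondition{
                {Type: corev1.PodReady, Status: corev1.ConditionFalse},
            },
        }}
        // prints false -- the state metrics-server-9975d5f86-zb7nt stays in below
        fmt.Println(isPodReady(pod))
    }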
	I0819 12:19:49.056315  501046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fgk64" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.146085  501046 pod_ready.go:93] pod "coredns-74ff55c5b-fgk64" in "kube-system" namespace has status "Ready":"True"
	I0819 12:19:49.146163  501046 pod_ready.go:82] duration metric: took 89.729887ms for pod "coredns-74ff55c5b-fgk64" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.146190  501046 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.166911  501046 pod_ready.go:93] pod "etcd-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:19:49.166988  501046 pod_ready.go:82] duration metric: took 20.775622ms for pod "etcd-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.167018  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.209350  501046 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:19:49.209430  501046 pod_ready.go:82] duration metric: took 42.390297ms for pod "kube-apiserver-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:49.209457  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:19:50.253172  501046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.037736647s)
	I0819 12:19:50.253488  501046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.661526118s)
	I0819 12:19:50.253538  501046 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-091610"
	I0819 12:19:50.253632  501046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.543375322s)
	I0819 12:19:50.253883  501046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.163409712s)
	I0819 12:19:50.255434  501046 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-091610 addons enable metrics-server
	
	I0819 12:19:50.261428  501046 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0819 12:19:50.264494  501046 addons.go:510] duration metric: took 17.426114116s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
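Once the apiserver answered, all four addon applies completed in a single pass (5.5s to 8.0s each) and the addon step finished in 17.4s, so the remaining risk in this run is the readiness gate, which keeps polling below: kube-controller-manager takes 1m24s to become Ready, and then the wait moves on to metrics-server.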
	I0819 12:19:51.216583  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:19:53.217173  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:19:55.217451  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:19:57.219284  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:19:59.718609  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:02.216803  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:04.220075  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:06.785923  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:09.220722  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:11.221213  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:13.716626  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:15.719468  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:18.222355  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:20.717989  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:23.216321  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:25.717301  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:28.216857  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:30.217367  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:32.716795  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:35.216714  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:37.717623  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:40.216524  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:42.219072  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:44.716286  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:47.216192  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:49.221331  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:51.717029  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:54.217361  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:56.716093  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:20:58.717156  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:01.218048  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:03.228971  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:05.715650  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:07.716434  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:10.217804  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:12.732164  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:13.215594  501046 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.215623  501046 pod_ready.go:82] duration metric: took 1m24.006129708s for pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.215635  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g2lvm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.220885  501046 pod_ready.go:93] pod "kube-proxy-g2lvm" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.220915  501046 pod_ready.go:82] duration metric: took 5.271948ms for pod "kube-proxy-g2lvm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.220927  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.225808  501046 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.225834  501046 pod_ready.go:82] duration metric: took 4.899271ms for pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.225846  501046 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:15.239483  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:17.732425  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:19.734060  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:22.232634  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:24.732313  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:27.231949  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:29.232464  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:31.232512  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:33.734258  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:36.233780  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:38.733849  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:41.231720  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:43.232362  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:45.251210  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:47.732330  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:49.753397  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:52.231776  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:54.232224  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:56.234069  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:58.731973  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:01.232467  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:03.232512  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:05.731717  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:07.732883  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:09.735197  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:12.232783  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:14.731676  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:16.732169  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:18.732304  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:21.232625  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:23.233631  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:25.732121  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:27.732549  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:30.232291  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:32.232836  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:34.732648  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:37.232237  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:39.232533  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:41.731978  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:43.740253  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:46.231992  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:48.232118  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:50.232302  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:52.232653  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:54.233206  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:56.732617  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:59.232475  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:01.234267  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:03.732615  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:06.232841  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:08.233188  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:10.235912  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:12.732867  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:15.233354  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:17.732213  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:19.732786  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:22.232422  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:24.732496  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:26.733172  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:28.734947  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:30.751249  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:33.233010  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:35.233661  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:37.733373  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:40.231990  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:42.234703  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:44.732581  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:46.732863  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:49.232267  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:51.232776  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:53.732192  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:55.732525  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:58.233663  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:00.276863  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:02.731628  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:04.733426  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:07.232447  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:09.233530  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:11.732403  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:14.232975  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:16.732303  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:18.732401  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:20.741326  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:23.233107  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:25.732223  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:28.232138  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:30.233480  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:32.732556  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:35.233051  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:37.732495  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:39.732707  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:42.235719  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:44.732856  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:46.733493  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:49.232408  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:51.733161  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:53.733703  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:56.232830  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:58.732195  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:00.733771  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:03.232645  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:05.737116  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:08.232082  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:10.233206  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:12.732250  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:13.231680  501046 pod_ready.go:82] duration metric: took 4m0.005819776s for pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace to be "Ready" ...
	E0819 12:25:13.231709  501046 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 12:25:13.231721  501046 pod_ready.go:39] duration metric: took 5m24.266315688s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
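The E-line above is the first hard failure of the run: after 4m0s the metrics-server pod never reported Ready, so WaitExtra surfaced context.DeadlineExceeded and the test fell through to log collection. The shape of that wait, as a self-contained sketch (waitReady is an illustrative name):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitReady polls until ready() returns true or the context's deadline
    // expires, in which case it returns "context deadline exceeded" -- the
    // same error recorded by WaitExtra above.
    func waitReady(ctx context.Context, ready func() bool) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
                if ready() {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        err := waitReady(ctx, func() bool { return false }) // never Ready
        fmt.Println(err) // context deadline exceeded
    }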
	I0819 12:25:13.231736  501046 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:25:13.231764  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:13.231827  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:13.278233  501046 cri.go:89] found id: "4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:13.278259  501046 cri.go:89] found id: "448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:13.278275  501046 cri.go:89] found id: ""
	I0819 12:25:13.278283  501046 logs.go:276] 2 containers: [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152]
	I0819 12:25:13.278350  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.282134  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.285650  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:13.285720  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:13.336849  501046 cri.go:89] found id: "f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:13.336869  501046 cri.go:89] found id: "f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:13.336874  501046 cri.go:89] found id: ""
	I0819 12:25:13.336882  501046 logs.go:276] 2 containers: [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8]
	I0819 12:25:13.336938  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.343448  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.354543  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:13.354614  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:13.396355  501046 cri.go:89] found id: "a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:13.396380  501046 cri.go:89] found id: "52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:13.396386  501046 cri.go:89] found id: ""
	I0819 12:25:13.396395  501046 logs.go:276] 2 containers: [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d]
	I0819 12:25:13.396504  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.400677  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.404578  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:13.404709  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:13.452898  501046 cri.go:89] found id: "309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:13.452919  501046 cri.go:89] found id: "1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:13.452924  501046 cri.go:89] found id: ""
	I0819 12:25:13.452931  501046 logs.go:276] 2 containers: [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32]
	I0819 12:25:13.452991  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.457423  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.461432  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:13.461554  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:13.522946  501046 cri.go:89] found id: "8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:13.522971  501046 cri.go:89] found id: "495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:13.522976  501046 cri.go:89] found id: ""
	I0819 12:25:13.522983  501046 logs.go:276] 2 containers: [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6]
	I0819 12:25:13.523075  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.527003  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.530703  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:13.530803  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:13.575754  501046 cri.go:89] found id: "ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:13.575778  501046 cri.go:89] found id: "b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:13.575783  501046 cri.go:89] found id: ""
	I0819 12:25:13.575802  501046 logs.go:276] 2 containers: [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566]
	I0819 12:25:13.575883  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.579970  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.583748  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:13.583830  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:13.623809  501046 cri.go:89] found id: "312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:13.623844  501046 cri.go:89] found id: "ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:13.623848  501046 cri.go:89] found id: ""
	I0819 12:25:13.623856  501046 logs.go:276] 2 containers: [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b]
	I0819 12:25:13.623932  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.627973  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.632210  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:13.632333  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:13.692474  501046 cri.go:89] found id: "d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:13.692537  501046 cri.go:89] found id: "7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:13.692549  501046 cri.go:89] found id: ""
	I0819 12:25:13.692557  501046 logs.go:276] 2 containers: [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b]
	I0819 12:25:13.692619  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.697385  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.701826  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:13.701927  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:13.745714  501046 cri.go:89] found id: "e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:13.745787  501046 cri.go:89] found id: ""
	I0819 12:25:13.745800  501046 logs.go:276] 1 containers: [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60]
	I0819 12:25:13.745875  501046 ssh_runner.go:195] Run: which crictl
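Log collection starts by resolving, for each control-plane component, the IDs of every matching container, running or exited, which is why most components list two IDs here (one from each start of the cluster) while kubernetes-dashboard, created only after the restart, lists one. A sketch of the same discovery step using the exact crictl invocation shown above (containerIDs is an illustrative name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same command as the cri.go lines: list the IDs
    // of all containers (running or exited) whose name matches the filter.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(len(ids), "containers:", ids) // two per component in this run
    }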
	I0819 12:25:13.749858  501046 logs.go:123] Gathering logs for kube-apiserver [448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152] ...
	I0819 12:25:13.749885  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:13.815438  501046 logs.go:123] Gathering logs for etcd [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635] ...
	I0819 12:25:13.815473  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:13.860833  501046 logs.go:123] Gathering logs for etcd [f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8] ...
	I0819 12:25:13.860871  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:13.901919  501046 logs.go:123] Gathering logs for kube-scheduler [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712] ...
	I0819 12:25:13.901947  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:13.948288  501046 logs.go:123] Gathering logs for storage-provisioner [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929] ...
	I0819 12:25:13.948323  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:13.991333  501046 logs.go:123] Gathering logs for kube-apiserver [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61] ...
	I0819 12:25:13.991362  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:14.058020  501046 logs.go:123] Gathering logs for kube-proxy [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be] ...
	I0819 12:25:14.058057  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:14.101288  501046 logs.go:123] Gathering logs for container status ...
	I0819 12:25:14.101321  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:14.146833  501046 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:14.146864  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:25:14.201698  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865111     673 reflector.go:138] object-"default"/"default-token-ddbn8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ddbn8" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.201957  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865470     673 reflector.go:138] object-"kube-system"/"coredns-token-24w5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-24w5r" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202184  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865674     673 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202395  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865769     673 reflector.go:138] object-"kube-system"/"kindnet-token-45phz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-45phz" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202610  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866431     673 reflector.go:138] object-"kube-system"/"kube-proxy-token-6m5lt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6m5lt" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202838  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866509     673 reflector.go:138] object-"kube-system"/"storage-provisioner-token-lvtph": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-lvtph" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.203068  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866580     673 reflector.go:138] object-"kube-system"/"metrics-server-token-hgch9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hgch9" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.203423  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866601     673 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.211248  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.439458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.212820  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.960458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.215642  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:04 old-k8s-version-091610 kubelet[673]: E0819 12:20:04.809492     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.217432  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:15 old-k8s-version-091610 kubelet[673]: E0819 12:20:15.775810     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.217895  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:16 old-k8s-version-091610 kubelet[673]: E0819 12:20:16.146247     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.218235  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:17 old-k8s-version-091610 kubelet[673]: E0819 12:20:17.149552     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.218564  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:18 old-k8s-version-091610 kubelet[673]: E0819 12:20:18.503760     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.221343  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:27 old-k8s-version-091610 kubelet[673]: E0819 12:20:27.783794     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.222272  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:34 old-k8s-version-091610 kubelet[673]: E0819 12:20:34.224236     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.222598  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:38 old-k8s-version-091610 kubelet[673]: E0819 12:20:38.503822     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.222781  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:41 old-k8s-version-091610 kubelet[673]: E0819 12:20:41.779721     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.223191  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:49 old-k8s-version-091610 kubelet[673]: E0819 12:20:49.769540     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.223380  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:53 old-k8s-version-091610 kubelet[673]: E0819 12:20:53.783516     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.223970  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:01 old-k8s-version-091610 kubelet[673]: E0819 12:21:01.379793     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.224153  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:05 old-k8s-version-091610 kubelet[673]: E0819 12:21:05.770046     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.224481  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:08 old-k8s-version-091610 kubelet[673]: E0819 12:21:08.504303     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.226909  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:19 old-k8s-version-091610 kubelet[673]: E0819 12:21:19.779165     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.227248  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:22 old-k8s-version-091610 kubelet[673]: E0819 12:21:22.771924     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.227440  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:31 old-k8s-version-091610 kubelet[673]: E0819 12:21:31.769891     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.227798  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:36 old-k8s-version-091610 kubelet[673]: E0819 12:21:36.770025     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.227984  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:46 old-k8s-version-091610 kubelet[673]: E0819 12:21:46.779430     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.228572  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:48 old-k8s-version-091610 kubelet[673]: E0819 12:21:48.496386     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.228896  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:49 old-k8s-version-091610 kubelet[673]: E0819 12:21:49.499775     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.229081  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:59 old-k8s-version-091610 kubelet[673]: E0819 12:21:59.770059     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.229409  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:01 old-k8s-version-091610 kubelet[673]: E0819 12:22:01.769614     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.229593  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:11 old-k8s-version-091610 kubelet[673]: E0819 12:22:11.770374     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.229921  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:12 old-k8s-version-091610 kubelet[673]: E0819 12:22:12.769816     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.230251  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:23 old-k8s-version-091610 kubelet[673]: E0819 12:22:23.769634     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.230434  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:25 old-k8s-version-091610 kubelet[673]: E0819 12:22:25.770010     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.230619  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:36 old-k8s-version-091610 kubelet[673]: E0819 12:22:36.771414     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.230951  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:38 old-k8s-version-091610 kubelet[673]: E0819 12:22:38.769996     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.233455  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:48 old-k8s-version-091610 kubelet[673]: E0819 12:22:48.780860     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.233794  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:52 old-k8s-version-091610 kubelet[673]: E0819 12:22:52.770145     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.233987  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:00 old-k8s-version-091610 kubelet[673]: E0819 12:23:00.770567     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.234319  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:07 old-k8s-version-091610 kubelet[673]: E0819 12:23:07.769578     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.234504  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:12 old-k8s-version-091610 kubelet[673]: E0819 12:23:12.772710     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.235096  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:22 old-k8s-version-091610 kubelet[673]: E0819 12:23:22.733720     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.235282  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:26 old-k8s-version-091610 kubelet[673]: E0819 12:23:26.770120     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.235611  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:28 old-k8s-version-091610 kubelet[673]: E0819 12:23:28.503831     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.235799  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:38 old-k8s-version-091610 kubelet[673]: E0819 12:23:38.770038     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.236126  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:42 old-k8s-version-091610 kubelet[673]: E0819 12:23:42.769697     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.236311  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:50 old-k8s-version-091610 kubelet[673]: E0819 12:23:50.774063     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.236639  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:57 old-k8s-version-091610 kubelet[673]: E0819 12:23:57.769849     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.236828  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:03 old-k8s-version-091610 kubelet[673]: E0819 12:24:03.769943     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.237153  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: E0819 12:24:12.770074     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.237337  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:18 old-k8s-version-091610 kubelet[673]: E0819 12:24:18.770049     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.237665  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: E0819 12:24:27.769551     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.237851  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:32 old-k8s-version-091610 kubelet[673]: E0819 12:24:32.769991     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.238178  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.238362  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.238686  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.238869  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.239200  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:14.239212  501046 logs.go:123] Gathering logs for kube-scheduler [1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32] ...
	I0819 12:25:14.239227  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:14.281119  501046 logs.go:123] Gathering logs for kube-proxy [495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6] ...
	I0819 12:25:14.281158  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:14.333451  501046 logs.go:123] Gathering logs for kindnet [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f] ...
	I0819 12:25:14.333481  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:14.397535  501046 logs.go:123] Gathering logs for kindnet [ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b] ...
	I0819 12:25:14.397569  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:14.462615  501046 logs.go:123] Gathering logs for kubernetes-dashboard [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60] ...
	I0819 12:25:14.462649  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:14.509982  501046 logs.go:123] Gathering logs for storage-provisioner [7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b] ...
	I0819 12:25:14.510040  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:14.550272  501046 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:14.550299  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:14.612323  501046 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:14.612358  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:14.632096  501046 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:14.632172  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:25:14.809269  501046 logs.go:123] Gathering logs for coredns [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210] ...
	I0819 12:25:14.809356  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:14.866213  501046 logs.go:123] Gathering logs for coredns [52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d] ...
	I0819 12:25:14.866243  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:14.907317  501046 logs.go:123] Gathering logs for kube-controller-manager [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524] ...
	I0819 12:25:14.907344  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:14.970762  501046 logs.go:123] Gathering logs for kube-controller-manager [b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566] ...
	I0819 12:25:14.970797  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:15.044244  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:15.045762  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:25:15.045885  501046 out.go:270] X Problems detected in kubelet:
	W0819 12:25:15.056894  501046 out.go:270]   Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:15.057167  501046 out.go:270]   Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:15.057177  501046 out.go:270]   Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:15.057184  501046 out.go:270]   Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:15.057189  501046 out.go:270]   Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:15.057204  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:15.057276  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
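	(The log-gathering pass above is driven by minikube itself. To reproduce it by hand against the same profile, roughly the following shell commands could be used — a sketch only, assuming the profile name old-k8s-version-091610 from this run and a placeholder <container-id> taken from the crictl ps output; the crictl and journalctl invocations mirror the Run: lines in the log.)

	# List all kube-apiserver containers (running or exited) inside the node
	minikube ssh -p old-k8s-version-091610 -- sudo crictl ps -a --quiet --name=kube-apiserver
	# Tail the last 400 lines of one container's logs, as the harness does
	minikube ssh -p old-k8s-version-091610 -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	# Pull the kubelet journal the same way the harness gathers it
	minikube ssh -p old-k8s-version-091610 -- sudo journalctl -u kubelet -n 400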
	I0819 12:25:25.058776  501046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:25:25.072234  501046 api_server.go:72] duration metric: took 5m52.234152061s to wait for apiserver process to appear ...
	I0819 12:25:25.072260  501046 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:25:25.072299  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:25.072360  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:25.126268  501046 cri.go:89] found id: "4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:25.126304  501046 cri.go:89] found id: "448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:25.126309  501046 cri.go:89] found id: ""
	I0819 12:25:25.126317  501046 logs.go:276] 2 containers: [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152]
	I0819 12:25:25.126384  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.130959  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.135147  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:25.135222  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:25.179964  501046 cri.go:89] found id: "f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:25.179989  501046 cri.go:89] found id: "f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:25.179995  501046 cri.go:89] found id: ""
	I0819 12:25:25.180003  501046 logs.go:276] 2 containers: [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8]
	I0819 12:25:25.180069  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.184388  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.188376  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:25.188454  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:25.236043  501046 cri.go:89] found id: "a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:25.236069  501046 cri.go:89] found id: "52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:25.236076  501046 cri.go:89] found id: ""
	I0819 12:25:25.236084  501046 logs.go:276] 2 containers: [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d]
	I0819 12:25:25.236146  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.239980  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.243901  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:25.243981  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:25.289379  501046 cri.go:89] found id: "309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:25.289399  501046 cri.go:89] found id: "1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:25.289404  501046 cri.go:89] found id: ""
	I0819 12:25:25.289411  501046 logs.go:276] 2 containers: [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32]
	I0819 12:25:25.289473  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.293422  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.297200  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:25.297273  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:25.355487  501046 cri.go:89] found id: "8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:25.355511  501046 cri.go:89] found id: "495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:25.355516  501046 cri.go:89] found id: ""
	I0819 12:25:25.355523  501046 logs.go:276] 2 containers: [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6]
	I0819 12:25:25.355580  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.359673  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.363763  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:25.363845  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:25.407918  501046 cri.go:89] found id: "ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:25.407998  501046 cri.go:89] found id: "b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:25.408009  501046 cri.go:89] found id: ""
	I0819 12:25:25.408017  501046 logs.go:276] 2 containers: [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566]
	I0819 12:25:25.408087  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.412111  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.416354  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:25.416467  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:25.465412  501046 cri.go:89] found id: "312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:25.465436  501046 cri.go:89] found id: "ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:25.465441  501046 cri.go:89] found id: ""
	I0819 12:25:25.465449  501046 logs.go:276] 2 containers: [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b]
	I0819 12:25:25.465538  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.469407  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.473071  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:25.473191  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:25.521059  501046 cri.go:89] found id: "d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:25.521081  501046 cri.go:89] found id: "7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:25.521086  501046 cri.go:89] found id: ""
	I0819 12:25:25.521094  501046 logs.go:276] 2 containers: [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b]
	I0819 12:25:25.521154  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.525152  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.528765  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:25.528852  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:25.568893  501046 cri.go:89] found id: "e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:25.568965  501046 cri.go:89] found id: ""
	I0819 12:25:25.568986  501046 logs.go:276] 1 containers: [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60]
	I0819 12:25:25.569076  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.573043  501046 logs.go:123] Gathering logs for kube-scheduler [1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32] ...
	I0819 12:25:25.573094  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:25.620654  501046 logs.go:123] Gathering logs for kindnet [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f] ...
	I0819 12:25:25.620685  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:25.686239  501046 logs.go:123] Gathering logs for kindnet [ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b] ...
	I0819 12:25:25.686276  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:25.736109  501046 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:25.736143  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:25:25.900489  501046 logs.go:123] Gathering logs for kube-apiserver [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61] ...
	I0819 12:25:25.900519  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:25.972186  501046 logs.go:123] Gathering logs for etcd [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635] ...
	I0819 12:25:25.972220  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:26.029587  501046 logs.go:123] Gathering logs for coredns [52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d] ...
	I0819 12:25:26.029664  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:26.074199  501046 logs.go:123] Gathering logs for kube-scheduler [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712] ...
	I0819 12:25:26.074229  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:26.117377  501046 logs.go:123] Gathering logs for kubernetes-dashboard [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60] ...
	I0819 12:25:26.117408  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:26.156566  501046 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:26.156593  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:26.173495  501046 logs.go:123] Gathering logs for kube-apiserver [448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152] ...
	I0819 12:25:26.173522  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:26.254028  501046 logs.go:123] Gathering logs for coredns [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210] ...
	I0819 12:25:26.254063  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:26.297887  501046 logs.go:123] Gathering logs for kube-controller-manager [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524] ...
	I0819 12:25:26.297918  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:26.375578  501046 logs.go:123] Gathering logs for kube-controller-manager [b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566] ...
	I0819 12:25:26.375616  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:26.464701  501046 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:26.464745  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:25:26.519991  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865111     673 reflector.go:138] object-"default"/"default-token-ddbn8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ddbn8" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520228  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865470     673 reflector.go:138] object-"kube-system"/"coredns-token-24w5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-24w5r" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520440  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865674     673 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520655  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865769     673 reflector.go:138] object-"kube-system"/"kindnet-token-45phz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-45phz" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520872  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866431     673 reflector.go:138] object-"kube-system"/"kube-proxy-token-6m5lt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6m5lt" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521102  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866509     673 reflector.go:138] object-"kube-system"/"storage-provisioner-token-lvtph": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-lvtph" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521341  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866580     673 reflector.go:138] object-"kube-system"/"metrics-server-token-hgch9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hgch9" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521548  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866601     673 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.529491  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.439458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.531105  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.960458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.533946  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:04 old-k8s-version-091610 kubelet[673]: E0819 12:20:04.809492     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.535831  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:15 old-k8s-version-091610 kubelet[673]: E0819 12:20:15.775810     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.536295  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:16 old-k8s-version-091610 kubelet[673]: E0819 12:20:16.146247     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.536631  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:17 old-k8s-version-091610 kubelet[673]: E0819 12:20:17.149552     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.536963  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:18 old-k8s-version-091610 kubelet[673]: E0819 12:20:18.503760     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.539794  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:27 old-k8s-version-091610 kubelet[673]: E0819 12:20:27.783794     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.540740  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:34 old-k8s-version-091610 kubelet[673]: E0819 12:20:34.224236     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541073  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:38 old-k8s-version-091610 kubelet[673]: E0819 12:20:38.503822     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541260  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:41 old-k8s-version-091610 kubelet[673]: E0819 12:20:41.779721     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.541590  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:49 old-k8s-version-091610 kubelet[673]: E0819 12:20:49.769540     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541778  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:53 old-k8s-version-091610 kubelet[673]: E0819 12:20:53.783516     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.542378  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:01 old-k8s-version-091610 kubelet[673]: E0819 12:21:01.379793     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.542565  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:05 old-k8s-version-091610 kubelet[673]: E0819 12:21:05.770046     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.542903  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:08 old-k8s-version-091610 kubelet[673]: E0819 12:21:08.504303     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.545363  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:19 old-k8s-version-091610 kubelet[673]: E0819 12:21:19.779165     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.545698  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:22 old-k8s-version-091610 kubelet[673]: E0819 12:21:22.771924     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.545890  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:31 old-k8s-version-091610 kubelet[673]: E0819 12:21:31.769891     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.546222  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:36 old-k8s-version-091610 kubelet[673]: E0819 12:21:36.770025     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.546434  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:46 old-k8s-version-091610 kubelet[673]: E0819 12:21:46.779430     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.547032  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:48 old-k8s-version-091610 kubelet[673]: E0819 12:21:48.496386     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.547368  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:49 old-k8s-version-091610 kubelet[673]: E0819 12:21:49.499775     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.547557  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:59 old-k8s-version-091610 kubelet[673]: E0819 12:21:59.770059     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.547890  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:01 old-k8s-version-091610 kubelet[673]: E0819 12:22:01.769614     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548076  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:11 old-k8s-version-091610 kubelet[673]: E0819 12:22:11.770374     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.548409  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:12 old-k8s-version-091610 kubelet[673]: E0819 12:22:12.769816     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548740  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:23 old-k8s-version-091610 kubelet[673]: E0819 12:22:23.769634     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548929  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:25 old-k8s-version-091610 kubelet[673]: E0819 12:22:25.770010     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.549115  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:36 old-k8s-version-091610 kubelet[673]: E0819 12:22:36.771414     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.549448  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:38 old-k8s-version-091610 kubelet[673]: E0819 12:22:38.769996     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.551930  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:48 old-k8s-version-091610 kubelet[673]: E0819 12:22:48.780860     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.552266  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:52 old-k8s-version-091610 kubelet[673]: E0819 12:22:52.770145     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.552458  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:00 old-k8s-version-091610 kubelet[673]: E0819 12:23:00.770567     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.552792  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:07 old-k8s-version-091610 kubelet[673]: E0819 12:23:07.769578     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.552978  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:12 old-k8s-version-091610 kubelet[673]: E0819 12:23:12.772710     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.553577  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:22 old-k8s-version-091610 kubelet[673]: E0819 12:23:22.733720     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.553765  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:26 old-k8s-version-091610 kubelet[673]: E0819 12:23:26.770120     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.554099  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:28 old-k8s-version-091610 kubelet[673]: E0819 12:23:28.503831     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.554285  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:38 old-k8s-version-091610 kubelet[673]: E0819 12:23:38.770038     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.554623  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:42 old-k8s-version-091610 kubelet[673]: E0819 12:23:42.769697     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.554811  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:50 old-k8s-version-091610 kubelet[673]: E0819 12:23:50.774063     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.555146  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:57 old-k8s-version-091610 kubelet[673]: E0819 12:23:57.769849     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.555333  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:03 old-k8s-version-091610 kubelet[673]: E0819 12:24:03.769943     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.555663  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: E0819 12:24:12.770074     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.555850  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:18 old-k8s-version-091610 kubelet[673]: E0819 12:24:18.770049     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.556178  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: E0819 12:24:27.769551     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.556364  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:32 old-k8s-version-091610 kubelet[673]: E0819 12:24:32.769991     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.556695  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.556880  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.557210  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.557395  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.557725  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.557911  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.558246  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:26.558257  501046 logs.go:123] Gathering logs for etcd [f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8] ...
	I0819 12:25:26.558273  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:26.610680  501046 logs.go:123] Gathering logs for kube-proxy [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be] ...
	I0819 12:25:26.610707  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:26.659447  501046 logs.go:123] Gathering logs for kube-proxy [495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6] ...
	I0819 12:25:26.659473  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:26.700658  501046 logs.go:123] Gathering logs for storage-provisioner [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929] ...
	I0819 12:25:26.700685  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:26.743391  501046 logs.go:123] Gathering logs for storage-provisioner [7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b] ...
	I0819 12:25:26.743418  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:26.791267  501046 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:26.791299  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:26.849932  501046 logs.go:123] Gathering logs for container status ...
	I0819 12:25:26.849967  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:26.906316  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:26.906346  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:25:26.906392  501046 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0819 12:25:26.906427  501046 out.go:270]   Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	  Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.906433  501046 out.go:270]   Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.906450  501046 out.go:270]   Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	  Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.906457  501046 out.go:270]   Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.906462  501046 out.go:270]   Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	  Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:26.906474  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:26.906482  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:25:36.908448  501046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0819 12:25:36.926940  501046 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0819 12:25:36.928948  501046 out.go:201] 
	W0819 12:25:36.930847  501046 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 12:25:36.930990  501046 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 12:25:36.931063  501046 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 12:25:36.931093  501046 out.go:270] * 
	* 
	W0819 12:25:36.932292  501046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 12:25:36.935172  501046 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-091610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
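The stderr above shows two recurring but distinct symptoms: metrics-server cycling through ErrImagePull/ImagePullBackOff because fake.domain is unresolvable (the Audit table below shows the test itself pointed the addon at that registry via --registries=MetricsServer=fake.domain, so that pull failure looks deliberate), and dashboard-metrics-scraper stuck in CrashLoopBackOff. The failure that actually ends the test is K8S_UNHEALTHY_CONTROL_PLANE: the control plane never reported v1.20.0 within the 6m wait, so minikube exits 102. A minimal triage sketch, assuming the kubeconfig context minikube created for this profile still exists (pod names are taken verbatim from the log above):

	# Inspect the two noisy pods named in the kubelet warnings
	kubectl --context old-k8s-version-091610 -n kube-system describe pod metrics-server-9975d5f86-zb7nt
	kubectl --context old-k8s-version-091610 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-kgs2g --previous

	# Recovery path suggested by minikube itself in the output above
	out/minikube-linux-arm64 delete --all --purge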
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-091610
helpers_test.go:235: (dbg) docker inspect old-k8s-version-091610:

-- stdout --
	[
	    {
	        "Id": "24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54",
	        "Created": "2024-08-19T12:16:42.269353383Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501287,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T12:19:25.6405995Z",
	            "FinishedAt": "2024-08-19T12:19:23.964583184Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54/hostname",
	        "HostsPath": "/var/lib/docker/containers/24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54/hosts",
	        "LogPath": "/var/lib/docker/containers/24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54/24575c61d664e6cf97d468faa9e7c0eae4a670fc5a3b94b559429b65bb0c0e54-json.log",
	        "Name": "/old-k8s-version-091610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-091610:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-091610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d9dcfcb41dcf8b1b62e4fac5582e273c1e4cd9c41cf1a75f339fe74c60d79037-init/diff:/var/lib/docker/overlay2/ec0afb666e8237335e438a7adc5cdc83345e3266b08ae54bf0b7ce8a2781370a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9dcfcb41dcf8b1b62e4fac5582e273c1e4cd9c41cf1a75f339fe74c60d79037/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9dcfcb41dcf8b1b62e4fac5582e273c1e4cd9c41cf1a75f339fe74c60d79037/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9dcfcb41dcf8b1b62e4fac5582e273c1e4cd9c41cf1a75f339fe74c60d79037/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-091610",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-091610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-091610",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-091610",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-091610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "682529bd4e21d4f725a47ce1116ec5f1f2dc4818373fce5933deaef305a90481",
	            "SandboxKey": "/var/run/docker/netns/682529bd4e21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-091610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "176f4bad435599ae7416aad5afc0b7e5443f14f72632a64b92a23d2b6ca232d9",
	                    "EndpointID": "12efbc4292f7a7ef14883930da3a1e361f3de6df3cc78d16ffd5eac06c0bc1d6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-091610",
	                        "24575c61d664"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
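For a quicker read of the fields that matter here (container state, restart count, the profile network's IP, and the published ports), the same inspect data can be narrowed with Go templates; a sketch, assuming the container from this run is still present:

	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-091610
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-091610").IPAddress}}' old-k8s-version-091610
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-091610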
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-091610 -n old-k8s-version-091610
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-091610 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-091610 logs -n 25: (2.06523977s)
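Note that the post-mortem above truncates output (logs -n 25); when filing the issue the boxed message asks for, a full log file is more useful. A sketch, using the command minikube itself suggests:

	out/minikube-linux-arm64 -p old-k8s-version-091610 logs --file=logs.txt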
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-759874                            | force-systemd-env-759874 | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| pause   | -p pause-847048                                        | pause-847048             | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| unpause | -p pause-847048                                        | pause-847048             | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| pause   | -p pause-847048                                        | pause-847048             | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-847048                                        | pause-847048             | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | --alsologtostderr -v=5                                 |                          |         |         |                     |                     |
	| delete  | -p pause-847048                                        | pause-847048             | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	| start   | -p cert-expiration-553371                              | cert-expiration-553371   | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:16 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-759874                               | force-systemd-env-759874 | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-759874                            | force-systemd-env-759874 | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:15 UTC |
	| start   | -p cert-options-058229                                 | cert-options-058229      | jenkins | v1.33.1 | 19 Aug 24 12:15 UTC | 19 Aug 24 12:16 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-058229 ssh                                | cert-options-058229      | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-058229 -- sudo                         | cert-options-058229      | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-058229                                 | cert-options-058229      | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:16 UTC |
	| start   | -p old-k8s-version-091610                              | old-k8s-version-091610   | jenkins | v1.33.1 | 19 Aug 24 12:16 UTC | 19 Aug 24 12:19 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-553371                              | cert-expiration-553371   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-091610        | old-k8s-version-091610   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-091610                              | old-k8s-version-091610   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-553371                              | cert-expiration-553371   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	| start   | -p no-preload-069465                                   | no-preload-069465        | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:20 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-091610             | old-k8s-version-091610   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC | 19 Aug 24 12:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-091610                              | old-k8s-version-091610   | jenkins | v1.33.1 | 19 Aug 24 12:19 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-069465             | no-preload-069465        | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-069465                                   | no-preload-069465        | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-069465                  | no-preload-069465        | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC | 19 Aug 24 12:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-069465                                   | no-preload-069465        | jenkins | v1.33.1 | 19 Aug 24 12:21 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 12:21:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
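
The header above documents the klog prefix used on every line that follows: the leading character is the severity (I=info, W=warning, E=error, F=fatal), then the month/day, a microsecond timestamp, the thread id, and the emitting source file and line. As a quick, illustrative way to surface only warnings and errors from a saved copy of this log (the filename last-start.log is hypothetical):

	# Keep only klog lines whose severity is W, E, or F.
	grep -E '^[WEF][0-9]{4} ' last-start.log
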
	I0819 12:21:14.145685  506775 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:21:14.145874  506775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:14.145887  506775 out.go:358] Setting ErrFile to fd 2...
	I0819 12:21:14.145893  506775 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:21:14.146180  506775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 12:21:14.146677  506775 out.go:352] Setting JSON to false
	I0819 12:21:14.148154  506775 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7422,"bootTime":1724062653,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 12:21:14.148234  506775 start.go:139] virtualization:  
	I0819 12:21:14.151726  506775 out.go:177] * [no-preload-069465] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 12:21:14.153827  506775 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:21:14.153941  506775 notify.go:220] Checking for updates...
	I0819 12:21:14.157670  506775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:21:14.159420  506775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:21:14.161403  506775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 12:21:14.163294  506775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 12:21:14.165031  506775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:21:14.167598  506775 config.go:182] Loaded profile config "no-preload-069465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:21:14.168158  506775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:21:14.189812  506775 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:21:14.189928  506775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:21:14.257381  506775 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 12:21:14.24739193 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
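
The docker system info --format "{{json .}}" call above dumps the whole engine description as one JSON object, which minikube decodes in Go; the same fields can be inspected by hand. A minimal sketch with jq, using key names visible in the output above:

	# Pull out the handful of fields the driver check cares about.
	docker system info --format '{{json .}}' \
	  | jq '{NCPU, MemTotal, ServerVersion, OperatingSystem, CgroupDriver}'
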
	I0819 12:21:14.257503  506775 docker.go:307] overlay module found
	I0819 12:21:14.260601  506775 out.go:177] * Using the docker driver based on existing profile
	I0819 12:21:14.262806  506775 start.go:297] selected driver: docker
	I0819 12:21:14.262825  506775 start.go:901] validating driver "docker" against &{Name:no-preload-069465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-069465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:14.262968  506775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:21:14.263598  506775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:21:14.338478  506775 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 12:21:14.327843473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:21:14.338816  506775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 12:21:14.338837  506775 cni.go:84] Creating CNI manager for ""
	I0819 12:21:14.338856  506775 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:21:14.338935  506775 start.go:340] cluster config:
	{Name:no-preload-069465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-069465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:14.342139  506775 out.go:177] * Starting "no-preload-069465" primary control-plane node in "no-preload-069465" cluster
	I0819 12:21:14.343614  506775 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 12:21:14.345314  506775 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0819 12:21:14.347077  506775 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:21:14.347229  506775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/config.json ...
	I0819 12:21:14.347558  506775 cache.go:107] acquiring lock: {Name:mk71b4b3a76fa56c5e588050fc0333065448399a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.347631  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0819 12:21:14.347640  506775 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.155µs
	I0819 12:21:14.347648  506775 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0819 12:21:14.347658  506775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 12:21:14.347921  506775 cache.go:107] acquiring lock: {Name:mk4f5c3ea2278cd2581e6c1fe18e65b0d71f7f87 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.347991  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0819 12:21:14.348000  506775 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 85.823µs
	I0819 12:21:14.348007  506775 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0819 12:21:14.348018  506775 cache.go:107] acquiring lock: {Name:mk3dd105983b4d976eb6c46b923a0f3cc83fd513 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348053  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0819 12:21:14.348059  506775 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 42.42µs
	I0819 12:21:14.348065  506775 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0819 12:21:14.348074  506775 cache.go:107] acquiring lock: {Name:mk27d900409e5a5c3c260d651571a04bfa7b025b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348100  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0819 12:21:14.348104  506775 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 32.163µs
	I0819 12:21:14.348110  506775 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0819 12:21:14.348125  506775 cache.go:107] acquiring lock: {Name:mk750e7a561a0fd7bffa6e3a21edb1ace03bccdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348166  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0819 12:21:14.348172  506775 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 54.374µs
	I0819 12:21:14.348182  506775 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0819 12:21:14.348191  506775 cache.go:107] acquiring lock: {Name:mk39f67fa0822719f12d33e5129dc444a013204a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348224  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0819 12:21:14.348229  506775 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 39.072µs
	I0819 12:21:14.348234  506775 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0819 12:21:14.348243  506775 cache.go:107] acquiring lock: {Name:mk076c86aa5f9ce1565b9cd6ce125370c95c5660 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348270  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0819 12:21:14.348275  506775 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 33.345µs
	I0819 12:21:14.348281  506775 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0819 12:21:14.348289  506775 cache.go:107] acquiring lock: {Name:mk752ccc7d68d1a7e4bb60761f25b2624cc89e26 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.348313  506775 cache.go:115] /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0819 12:21:14.348318  506775 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 29.874µs
	I0819 12:21:14.348323  506775 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0819 12:21:14.348329  506775 cache.go:87] Successfully saved all images to host disk.
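
Because this profile was started with --preload=false, each image is cached as an individual tarball under the .minikube/cache/images tree named in the lines above, and every cache hit here is just an existence check on disk. Listing that directory (path copied from the log) shows the same set the messages report:

	# One tar file per image, named <image>_<tag>.
	ls /home/jenkins/minikube-integration/19476-293809/.minikube/cache/images/arm64/registry.k8s.io
	# Expect kube-apiserver_v1.31.0, kube-controller-manager_v1.31.0, kube-proxy_v1.31.0,
	# kube-scheduler_v1.31.0, etcd_3.5.15-0, pause_3.10, and coredns/coredns_v1.11.1.
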
	W0819 12:21:14.369120  506775 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0819 12:21:14.369139  506775 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 12:21:14.369210  506775 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 12:21:14.369227  506775 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0819 12:21:14.369232  506775 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0819 12:21:14.369239  506775 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 12:21:14.369245  506775 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0819 12:21:14.500539  506775 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0819 12:21:14.500602  506775 cache.go:194] Successfully downloaded all kic artifacts
	I0819 12:21:14.500646  506775 start.go:360] acquireMachinesLock for no-preload-069465: {Name:mka9102ffd931668f28e72461bfbaa817bb75b86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 12:21:14.500723  506775 start.go:364] duration metric: took 46.948µs to acquireMachinesLock for "no-preload-069465"
	I0819 12:21:14.500748  506775 start.go:96] Skipping create...Using existing machine configuration
	I0819 12:21:14.500759  506775 fix.go:54] fixHost starting: 
	I0819 12:21:14.501048  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:14.519370  506775 fix.go:112] recreateIfNeeded on no-preload-069465: state=Stopped err=<nil>
	W0819 12:21:14.519403  506775 fix.go:138] unexpected machine state, will restart: <nil>
	I0819 12:21:14.522678  506775 out.go:177] * Restarting existing docker container for "no-preload-069465" ...
	I0819 12:21:10.217804  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:12.732164  501046 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:13.215594  501046 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.215623  501046 pod_ready.go:82] duration metric: took 1m24.006129708s for pod "kube-controller-manager-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.215635  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g2lvm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.220885  501046 pod_ready.go:93] pod "kube-proxy-g2lvm" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.220915  501046 pod_ready.go:82] duration metric: took 5.271948ms for pod "kube-proxy-g2lvm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.220927  501046 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.225808  501046 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:13.225834  501046 pod_ready.go:82] duration metric: took 4.899271ms for pod "kube-scheduler-old-k8s-version-091610" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:13.225846  501046 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:14.524486  506775 cli_runner.go:164] Run: docker start no-preload-069465
	I0819 12:21:14.891989  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:14.916155  506775 kic.go:430] container "no-preload-069465" state is running.
	I0819 12:21:14.918011  506775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-069465
	I0819 12:21:14.947581  506775 profile.go:143] Saving config to /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/config.json ...
	I0819 12:21:14.947903  506775 machine.go:93] provisionDockerMachine start ...
	I0819 12:21:14.947968  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:14.971109  506775 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:14.971450  506775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0819 12:21:14.971465  506775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 12:21:14.972407  506775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0819 12:21:18.106985  506775 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-069465
	
	I0819 12:21:18.107011  506775 ubuntu.go:169] provisioning hostname "no-preload-069465"
	I0819 12:21:18.107134  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:18.126799  506775 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:18.127105  506775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0819 12:21:18.127119  506775 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-069465 && echo "no-preload-069465" | sudo tee /etc/hostname
	I0819 12:21:18.271522  506775 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-069465
	
	I0819 12:21:18.271600  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:18.290681  506775 main.go:141] libmachine: Using SSH client type: native
	I0819 12:21:18.290974  506775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I0819 12:21:18.290997  506775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-069465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-069465/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-069465' | sudo tee -a /etc/hosts; 
				fi
			fi
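
The SSH snippet above is idempotent: it rewrites the 127.0.1.1 entry (or appends one) only when no line in /etc/hosts already ends with the new hostname, so repeated restarts do not pile up duplicates. An illustrative check on the node after provisioning:

	# The container's own hostname should now resolve locally.
	grep '^127.0.1.1' /etc/hosts
	# Expected: 127.0.1.1 no-preload-069465
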
	I0819 12:21:18.423388  506775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 12:21:18.423419  506775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19476-293809/.minikube CaCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19476-293809/.minikube}
	I0819 12:21:18.423438  506775 ubuntu.go:177] setting up certificates
	I0819 12:21:18.423448  506775 provision.go:84] configureAuth start
	I0819 12:21:18.423507  506775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-069465
	I0819 12:21:18.441564  506775 provision.go:143] copyHostCerts
	I0819 12:21:18.441632  506775 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem, removing ...
	I0819 12:21:18.441640  506775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem
	I0819 12:21:18.441742  506775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/ca.pem (1082 bytes)
	I0819 12:21:18.441863  506775 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem, removing ...
	I0819 12:21:18.441869  506775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem
	I0819 12:21:18.441900  506775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/cert.pem (1123 bytes)
	I0819 12:21:18.441966  506775 exec_runner.go:144] found /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem, removing ...
	I0819 12:21:18.441970  506775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem
	I0819 12:21:18.441993  506775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19476-293809/.minikube/key.pem (1675 bytes)
	I0819 12:21:18.442045  506775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem org=jenkins.no-preload-069465 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-069465]
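
The server certificate must name every address a client might dial, hence the SAN list [127.0.0.1 192.168.76.2 localhost minikube no-preload-069465] in the line above. One way to confirm the SANs actually embedded in the generated server.pem (standard openssl usage; illustrative):

	# Print the Subject Alternative Name extension of the machine cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
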
	I0819 12:21:18.872510  506775 provision.go:177] copyRemoteCerts
	I0819 12:21:18.872585  506775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 12:21:18.872647  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:18.889840  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:18.984369  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0819 12:21:19.012951  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0819 12:21:19.046231  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0819 12:21:19.078679  506775 provision.go:87] duration metric: took 655.214969ms to configureAuth
	I0819 12:21:19.078711  506775 ubuntu.go:193] setting minikube options for container-runtime
	I0819 12:21:19.078982  506775 config.go:182] Loaded profile config "no-preload-069465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:21:19.078995  506775 machine.go:96] duration metric: took 4.131080275s to provisionDockerMachine
	I0819 12:21:19.079009  506775 start.go:293] postStartSetup for "no-preload-069465" (driver="docker")
	I0819 12:21:19.079024  506775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 12:21:19.079081  506775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 12:21:19.079131  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:19.102101  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:15.239483  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:17.732425  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:19.734060  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:19.196212  506775 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 12:21:19.199470  506775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 12:21:19.199505  506775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 12:21:19.199515  506775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 12:21:19.199522  506775 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0819 12:21:19.199532  506775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/addons for local assets ...
	I0819 12:21:19.199595  506775 filesync.go:126] Scanning /home/jenkins/minikube-integration/19476-293809/.minikube/files for local assets ...
	I0819 12:21:19.199676  506775 filesync.go:149] local asset: /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem -> 2991912.pem in /etc/ssl/certs
	I0819 12:21:19.199783  506775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0819 12:21:19.209575  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem --> /etc/ssl/certs/2991912.pem (1708 bytes)
	I0819 12:21:19.238079  506775 start.go:296] duration metric: took 159.050201ms for postStartSetup
	I0819 12:21:19.238220  506775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:21:19.238297  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:19.254505  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:19.351563  506775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 12:21:19.360292  506775 fix.go:56] duration metric: took 4.859525011s for fixHost
	I0819 12:21:19.360318  506775 start.go:83] releasing machines lock for "no-preload-069465", held for 4.859582314s
	I0819 12:21:19.360392  506775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-069465
	I0819 12:21:19.381813  506775 ssh_runner.go:195] Run: cat /version.json
	I0819 12:21:19.381836  506775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 12:21:19.381865  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:19.381903  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:19.403839  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:19.426988  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:19.499043  506775 ssh_runner.go:195] Run: systemctl --version
	I0819 12:21:19.643415  506775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 12:21:19.648015  506775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 12:21:19.666540  506775 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 12:21:19.666664  506775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 12:21:19.675954  506775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
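
The find/sed pipeline above patches any existing loopback CNI config in place: it injects a "name": "loopback" field when one is missing and pins cniVersion to 1.0.0, which current CNI plugins require. A sketch of what a patched file looks like (the filename 200-loopback.conf is hypothetical; only the shape matters):

	cat /etc/cni/net.d/200-loopback.conf
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }
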
	I0819 12:21:19.675978  506775 start.go:495] detecting cgroup driver to use...
	I0819 12:21:19.676013  506775 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 12:21:19.676061  506775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 12:21:19.691503  506775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 12:21:19.706340  506775 docker.go:217] disabling cri-docker service (if available) ...
	I0819 12:21:19.706406  506775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 12:21:19.720248  506775 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 12:21:19.734455  506775 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 12:21:19.840085  506775 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 12:21:19.927750  506775 docker.go:233] disabling docker service ...
	I0819 12:21:19.927818  506775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 12:21:19.940942  506775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 12:21:19.953305  506775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 12:21:20.054159  506775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 12:21:20.154286  506775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0819 12:21:20.167787  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 12:21:20.186962  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 12:21:20.199070  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 12:21:20.210867  506775 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 12:21:20.211053  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 12:21:20.221971  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:21:20.235823  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 12:21:20.246958  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 12:21:20.258084  506775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 12:21:20.268642  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 12:21:20.279881  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 12:21:20.290239  506775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 12:21:20.301150  506775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 12:21:20.311508  506775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 12:21:20.320469  506775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:21:20.418565  506775 ssh_runner.go:195] Run: sudo systemctl restart containerd
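
The run of sed edits above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to match the cgroupfs driver detected earlier, the legacy v1 runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d; the daemon-reload plus restart then makes containerd pick the file up. A spot check of the result (illustrative):

	# Confirm the keys the sed edits were meant to set.
	grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
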
	I0819 12:21:20.587635  506775 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 12:21:20.587727  506775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 12:21:20.592062  506775 start.go:563] Will wait 60s for crictl version
	I0819 12:21:20.592132  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:21:20.597888  506775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 12:21:20.637552  506775 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 12:21:20.637624  506775 ssh_runner.go:195] Run: containerd --version
	I0819 12:21:20.671301  506775 ssh_runner.go:195] Run: containerd --version
	I0819 12:21:20.699616  506775 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 12:21:20.701746  506775 cli_runner.go:164] Run: docker network inspect no-preload-069465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 12:21:20.716462  506775 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0819 12:21:20.720138  506775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:21:20.732351  506775 kubeadm.go:883] updating cluster {Name:no-preload-069465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-069465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 12:21:20.732487  506775 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 12:21:20.732535  506775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 12:21:20.782773  506775 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 12:21:20.782796  506775 cache_images.go:84] Images are preloaded, skipping loading
	I0819 12:21:20.782804  506775 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.0 containerd true true} ...
	I0819 12:21:20.782967  506775 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-069465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-069465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
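
The [Service] fragment above becomes the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log; the empty ExecStart= line is deliberate, since systemd requires clearing an inherited command before overriding it. To see the merged unit on the node (illustrative):

	# Show the base kubelet unit plus all drop-ins, including 10-kubeadm.conf.
	systemctl cat kubelet
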
	I0819 12:21:20.783031  506775 ssh_runner.go:195] Run: sudo crictl info
	I0819 12:21:20.824218  506775 cni.go:84] Creating CNI manager for ""
	I0819 12:21:20.824243  506775 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 12:21:20.824253  506775 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 12:21:20.824274  506775 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-069465 NodeName:no-preload-069465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 12:21:20.824414  506775 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-069465"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 12:21:20.824528  506775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 12:21:20.834616  506775 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 12:21:20.834688  506775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 12:21:20.843761  506775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0819 12:21:20.862603  506775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 12:21:20.881729  506775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
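
With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new and the v1.31.0 binaries already on disk, the file can be sanity-checked before use. A hedged sketch, assuming kubeadm sits next to kubelet under /var/lib/minikube/binaries (kubeadm config validate is available in recent releases, including v1.31):

	# Validate the staged kubeadm config with the cluster's own kubeadm version.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
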
	I0819 12:21:20.911132  506775 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0819 12:21:20.914536  506775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 12:21:20.925539  506775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:21:21.019962  506775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:21:21.038706  506775 certs.go:68] Setting up /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465 for IP: 192.168.76.2
	I0819 12:21:21.038774  506775 certs.go:194] generating shared ca certs ...
	I0819 12:21:21.038805  506775 certs.go:226] acquiring lock for ca certs: {Name:mkf168e715338554e93ce93584b85aca19a124a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:21:21.039043  506775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key
	I0819 12:21:21.039143  506775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key
	I0819 12:21:21.039168  506775 certs.go:256] generating profile certs ...
	I0819 12:21:21.039296  506775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.key
	I0819 12:21:21.039409  506775 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/apiserver.key.6272a071
	I0819 12:21:21.039487  506775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/proxy-client.key
	I0819 12:21:21.039627  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191.pem (1338 bytes)
	W0819 12:21:21.039698  506775 certs.go:480] ignoring /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191_empty.pem, impossibly tiny 0 bytes
	I0819 12:21:21.039727  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca-key.pem (1675 bytes)
	I0819 12:21:21.039767  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/ca.pem (1082 bytes)
	I0819 12:21:21.039810  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/cert.pem (1123 bytes)
	I0819 12:21:21.039851  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/certs/key.pem (1675 bytes)
	I0819 12:21:21.039922  506775 certs.go:484] found cert: /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem (1708 bytes)
	I0819 12:21:21.040587  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 12:21:21.069495  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 12:21:21.101241  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 12:21:21.131794  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 12:21:21.158664  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0819 12:21:21.195380  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0819 12:21:21.239837  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 12:21:21.277781  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 12:21:21.325184  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 12:21:21.368916  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/certs/299191.pem --> /usr/share/ca-certificates/299191.pem (1338 bytes)
	I0819 12:21:21.420910  506775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/ssl/certs/2991912.pem --> /usr/share/ca-certificates/2991912.pem (1708 bytes)
	I0819 12:21:21.454029  506775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 12:21:21.475334  506775 ssh_runner.go:195] Run: openssl version
	I0819 12:21:21.482589  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2991912.pem && ln -fs /usr/share/ca-certificates/2991912.pem /etc/ssl/certs/2991912.pem"
	I0819 12:21:21.493984  506775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2991912.pem
	I0819 12:21:21.497882  506775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 19 11:42 /usr/share/ca-certificates/2991912.pem
	I0819 12:21:21.497942  506775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2991912.pem
	I0819 12:21:21.506059  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2991912.pem /etc/ssl/certs/3ec20f2e.0"
	I0819 12:21:21.515639  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 12:21:21.526078  506775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:21:21.529957  506775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:21:21.530076  506775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 12:21:21.537420  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0819 12:21:21.547095  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/299191.pem && ln -fs /usr/share/ca-certificates/299191.pem /etc/ssl/certs/299191.pem"
	I0819 12:21:21.556556  506775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/299191.pem
	I0819 12:21:21.559967  506775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 19 11:42 /usr/share/ca-certificates/299191.pem
	I0819 12:21:21.560033  506775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/299191.pem
	I0819 12:21:21.567015  506775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/299191.pem /etc/ssl/certs/51391683.0"
	I0819 12:21:21.578342  506775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 12:21:21.582185  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0819 12:21:21.589374  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0819 12:21:21.596876  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0819 12:21:21.603984  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0819 12:21:21.611421  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0819 12:21:21.618648  506775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
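The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate is still valid 24 hours from now. A minimal stand-alone sketch of the same check using only the Go standard library; the certificate paths come from the log, but the program itself is illustrative, not minikube's:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	for _, path := range os.Args[1:] {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// `-checkend 86400` succeeds iff the certificate is still
		// valid 86400 seconds (24h) from now, i.e. now+24h < NotAfter.
		deadline := time.Now().Add(86400 * time.Second)
		fmt.Printf("%s: notAfter=%s ok=%t\n", path, cert.NotAfter, deadline.Before(cert.NotAfter))
	}
}

Invoked as, for example, "go run certcheck.go /var/lib/minikube/certs/etcd/peer.crt" (illustrative usage).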
	I0819 12:21:21.626360  506775 kubeadm.go:392] StartCluster: {Name:no-preload-069465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-069465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 12:21:21.626473  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 12:21:21.626540  506775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 12:21:21.676751  506775 cri.go:89] found id: "124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c"
	I0819 12:21:21.676826  506775 cri.go:89] found id: "597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29"
	I0819 12:21:21.676844  506775 cri.go:89] found id: "7617a1768128b41f6dcaf96d6b1fcf0b5c8de9d5f467e91c5d8178ff86a810da"
	I0819 12:21:21.676859  506775 cri.go:89] found id: "0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96"
	I0819 12:21:21.676898  506775 cri.go:89] found id: "1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf"
	I0819 12:21:21.676933  506775 cri.go:89] found id: "3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7"
	I0819 12:21:21.676959  506775 cri.go:89] found id: "3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f"
	I0819 12:21:21.676976  506775 cri.go:89] found id: "80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401"
	I0819 12:21:21.676989  506775 cri.go:89] found id: ""
	I0819 12:21:21.677064  506775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0819 12:21:21.693080  506775 cri.go:116] JSON = null
	W0819 12:21:21.693173  506775 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0819 12:21:21.693282  506775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 12:21:21.707286  506775 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0819 12:21:21.707316  506775 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0819 12:21:21.707368  506775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0819 12:21:21.719180  506775 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0819 12:21:21.719842  506775 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-069465" does not appear in /home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:21:21.720136  506775 kubeconfig.go:62] /home/jenkins/minikube-integration/19476-293809/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-069465" cluster setting kubeconfig missing "no-preload-069465" context setting]
	I0819 12:21:21.720672  506775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/kubeconfig: {Name:mk83cf1ee61353d940dd326434ad6e97ed986eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:21:21.722078  506775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0819 12:21:21.750869  506775 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0819 12:21:21.750934  506775 kubeadm.go:597] duration metric: took 43.610747ms to restartPrimaryControlPlane
	I0819 12:21:21.750943  506775 kubeadm.go:394] duration metric: took 124.599126ms to StartCluster
	I0819 12:21:21.750959  506775 settings.go:142] acquiring lock: {Name:mkc4435b6c8d62b9d001c06e85eb76d8e377373c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:21:21.751025  506775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:21:21.752049  506775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19476-293809/kubeconfig: {Name:mk83cf1ee61353d940dd326434ad6e97ed986eab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 12:21:21.752314  506775 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 12:21:21.752430  506775 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0819 12:21:21.752892  506775 addons.go:69] Setting storage-provisioner=true in profile "no-preload-069465"
	I0819 12:21:21.752924  506775 addons.go:234] Setting addon storage-provisioner=true in "no-preload-069465"
	W0819 12:21:21.752933  506775 addons.go:243] addon storage-provisioner should already be in state true
	I0819 12:21:21.752956  506775 host.go:66] Checking if "no-preload-069465" exists ...
	I0819 12:21:21.753446  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:21.753782  506775 addons.go:69] Setting dashboard=true in profile "no-preload-069465"
	I0819 12:21:21.753834  506775 addons.go:234] Setting addon dashboard=true in "no-preload-069465"
	W0819 12:21:21.753849  506775 addons.go:243] addon dashboard should already be in state true
	I0819 12:21:21.753878  506775 host.go:66] Checking if "no-preload-069465" exists ...
	I0819 12:21:21.754110  506775 addons.go:69] Setting default-storageclass=true in profile "no-preload-069465"
	I0819 12:21:21.754185  506775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-069465"
	I0819 12:21:21.754374  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:21.754521  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:21.755433  506775 addons.go:69] Setting metrics-server=true in profile "no-preload-069465"
	I0819 12:21:21.755468  506775 addons.go:234] Setting addon metrics-server=true in "no-preload-069465"
	W0819 12:21:21.755474  506775 addons.go:243] addon metrics-server should already be in state true
	I0819 12:21:21.755520  506775 host.go:66] Checking if "no-preload-069465" exists ...
	I0819 12:21:21.755979  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:21.752606  506775 config.go:182] Loaded profile config "no-preload-069465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:21:21.760373  506775 out.go:177] * Verifying Kubernetes components...
	I0819 12:21:21.774360  506775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 12:21:21.805688  506775 addons.go:234] Setting addon default-storageclass=true in "no-preload-069465"
	W0819 12:21:21.805711  506775 addons.go:243] addon default-storageclass should already be in state true
	I0819 12:21:21.805742  506775 host.go:66] Checking if "no-preload-069465" exists ...
	I0819 12:21:21.806175  506775 cli_runner.go:164] Run: docker container inspect no-preload-069465 --format={{.State.Status}}
	I0819 12:21:21.836666  506775 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 12:21:21.839027  506775 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:21:21.839051  506775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 12:21:21.839111  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:21.845285  506775 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0819 12:21:21.847176  506775 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0819 12:21:21.850806  506775 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0819 12:21:21.851020  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0819 12:21:21.851042  506775 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0819 12:21:21.851107  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:21.852757  506775 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 12:21:21.852784  506775 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 12:21:21.852848  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:21.883106  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:21.911167  506775 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 12:21:21.911187  506775 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 12:21:21.911254  506775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-069465
	I0819 12:21:21.929725  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:21.929900  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:21.958159  506775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/no-preload-069465/id_rsa Username:docker}
	I0819 12:21:22.028804  506775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 12:21:22.071691  506775 node_ready.go:35] waiting up to 6m0s for node "no-preload-069465" to be "Ready" ...
	I0819 12:21:22.140563  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 12:21:22.156808  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:21:22.234279  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0819 12:21:22.234345  506775 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0819 12:21:22.279693  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0819 12:21:22.279763  506775 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0819 12:21:22.307901  506775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 12:21:22.307974  506775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0819 12:21:22.432251  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0819 12:21:22.432326  506775 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0819 12:21:22.467567  506775 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 12:21:22.467603  506775 retry.go:31] will retry after 143.428087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 12:21:22.476451  506775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 12:21:22.476526  506775 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 12:21:22.587237  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0819 12:21:22.587328  506775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0819 12:21:22.611677  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0819 12:21:22.657274  506775 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0819 12:21:22.657354  506775 retry.go:31] will retry after 326.026692ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
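	The "will retry after 143.428087ms" / "will retry after 326.026692ms" lines show the apply being retried while the apiserver on localhost:8443 is still coming up. A minimal sketch of that retry pattern, assuming kubectl is on PATH; the attempt count and delay are illustrative, not minikube's actual retry.go parameters:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply -f manifest` up to attempts times,
// sleeping delay between failures, as in the "apply failed, will retry"
// log lines above.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Illustrative values; the log shows sub-second, jittered delays.
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3, 300*time.Millisecond); err != nil {
		fmt.Println("giving up:", err)
	}
}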
	I0819 12:21:22.661810  506775 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:21:22.661837  506775 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 12:21:22.735628  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0819 12:21:22.735658  506775 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0819 12:21:22.801533  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 12:21:22.858274  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0819 12:21:22.858369  506775 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0819 12:21:22.983650  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0819 12:21:23.029229  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0819 12:21:23.029306  506775 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0819 12:21:23.145016  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0819 12:21:23.145092  506775 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0819 12:21:23.210144  506775 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:21:23.210218  506775 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0819 12:21:23.279400  506775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0819 12:21:22.232634  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:24.732313  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:26.315735  506775 node_ready.go:49] node "no-preload-069465" has status "Ready":"True"
	I0819 12:21:26.315765  506775 node_ready.go:38] duration metric: took 4.244039811s for node "no-preload-069465" to be "Ready" ...
	I0819 12:21:26.315775  506775 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 12:21:26.372137  506775 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rs5bm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.692834  506775 pod_ready.go:93] pod "coredns-6f6b679f8f-rs5bm" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.692875  506775 pod_ready.go:82] duration metric: took 320.693511ms for pod "coredns-6f6b679f8f-rs5bm" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.692888  506775 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.720054  506775 pod_ready.go:93] pod "etcd-no-preload-069465" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.720090  506775 pod_ready.go:82] duration metric: took 27.193893ms for pod "etcd-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.720105  506775 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.736073  506775 pod_ready.go:93] pod "kube-apiserver-no-preload-069465" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.736115  506775 pod_ready.go:82] duration metric: took 16.0012ms for pod "kube-apiserver-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.736127  506775 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.748560  506775 pod_ready.go:93] pod "kube-controller-manager-no-preload-069465" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.748602  506775 pod_ready.go:82] duration metric: took 12.466305ms for pod "kube-controller-manager-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.748615  506775 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c2rx8" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.760099  506775 pod_ready.go:93] pod "kube-proxy-c2rx8" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.760137  506775 pod_ready.go:82] duration metric: took 11.504703ms for pod "kube-proxy-c2rx8" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.760150  506775 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.921461  506775 pod_ready.go:93] pod "kube-scheduler-no-preload-069465" in "kube-system" namespace has status "Ready":"True"
	I0819 12:21:26.921496  506775 pod_ready.go:82] duration metric: took 161.337638ms for pod "kube-scheduler-no-preload-069465" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:26.921524  506775 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace to be "Ready" ...
	I0819 12:21:28.932568  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:29.409982  506775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.798225786s)
	I0819 12:21:29.647571  506775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.845995636s)
	I0819 12:21:29.647605  506775 addons.go:475] Verifying addon metrics-server=true in "no-preload-069465"
	I0819 12:21:29.647651  506775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.663926864s)
	I0819 12:21:29.647913  506775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.368469273s)
	I0819 12:21:29.649831  506775 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-069465 addons enable metrics-server
	
	I0819 12:21:29.659944  506775 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0819 12:21:27.231949  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:29.232464  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:29.661670  506775 addons.go:510] duration metric: took 7.909234858s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0819 12:21:31.428698  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:33.429023  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:31.232512  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:33.734258  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:35.430439  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:37.431560  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:36.233780  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:38.733849  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:39.927610  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:42.427298  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:41.231720  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:43.232362  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:44.429262  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:46.928458  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:45.251210  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:47.732330  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:49.753397  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:49.427883  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:51.927909  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:52.231776  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:54.232224  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:54.428344  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:56.928244  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:56.234069  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:58.731973  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:21:59.427309  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:01.429103  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:03.928387  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:01.232467  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:03.232512  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:06.428402  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:08.928335  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:05.731717  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:07.732883  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:09.735197  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:11.431229  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:13.928015  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:12.232783  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:14.731676  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:16.428302  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:18.928002  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:16.732169  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:18.732304  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:21.428229  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:23.429001  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:21.232625  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:23.233631  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:25.928961  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:28.432521  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:25.732121  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:27.732549  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:30.433679  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:32.927662  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:30.232291  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:32.232836  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:34.732648  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:34.927708  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:36.928096  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:37.232237  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:39.232533  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:39.429383  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:41.928004  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:43.928330  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:41.731978  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:43.740253  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:46.427694  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:48.429456  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:46.231992  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:48.232118  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:50.430645  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:52.927949  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:50.232302  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:52.232653  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:54.233206  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:55.427336  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:57.428312  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:56.732617  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:59.232475  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:22:59.928859  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:02.429066  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:01.234267  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:03.732615  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:04.928246  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:07.427455  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:06.232841  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:08.233188  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:09.428078  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:11.428186  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:13.927830  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:10.235912  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:12.732867  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:15.928342  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:17.928456  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:15.233354  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:17.732213  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:19.732786  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:20.430427  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:22.927639  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:22.232422  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:24.732496  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:25.428889  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:27.431248  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:26.733172  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:28.734947  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:29.927715  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:32.427575  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:30.751249  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:33.233010  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:34.428393  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:36.429603  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:38.927845  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:35.233661  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:37.733373  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:40.928287  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:43.427733  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:40.231990  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:42.234703  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:44.732581  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:45.429375  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:47.927850  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:46.732863  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:49.232267  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:49.928983  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:52.431193  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:51.232776  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:53.732192  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:54.928760  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:57.428632  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:55.732525  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:58.233663  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:23:59.928848  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:01.928882  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:00.276863  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:02.731628  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:04.733426  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:04.433128  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:06.927479  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:08.928669  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:07.232447  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:09.233530  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:11.427532  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:13.427850  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:11.732403  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:14.232975  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:15.928146  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:18.427771  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:16.732303  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:18.732401  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:20.430274  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:22.430677  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:20.741326  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:23.233107  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:24.432222  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:26.928404  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:25.732223  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:28.232138  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:29.427912  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:31.429021  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:33.928208  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:30.233480  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:32.732556  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:36.429574  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:38.927572  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:35.233051  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:37.732495  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:39.732707  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:40.927948  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:42.928931  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:42.235719  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:44.732856  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:45.428728  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:47.927864  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:46.733493  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:49.232408  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:49.929401  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:52.428250  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:51.733161  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:53.733703  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:54.929432  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:57.428053  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:56.232830  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:58.732195  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:24:59.428118  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:01.428532  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:03.927195  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:00.733771  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:03.232645  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:05.927978  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:07.928174  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:05.737116  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:08.232082  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:10.430775  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:12.928302  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:10.233206  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:12.732250  501046 pod_ready.go:103] pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:13.231680  501046 pod_ready.go:82] duration metric: took 4m0.005819776s for pod "metrics-server-9975d5f86-zb7nt" in "kube-system" namespace to be "Ready" ...
	E0819 12:25:13.231709  501046 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 12:25:13.231721  501046 pod_ready.go:39] duration metric: took 5m24.266315688s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
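
The block above is minikube's pod_ready poll loop: it re-checks the pod's Ready condition roughly every two seconds until a four-minute per-pod deadline expires, then records the "context deadline exceeded" failure. A minimal sketch of that pattern, assuming kubectl is on PATH; minikube implements this in Go against the API directly, so the kubectl call here is illustrative only:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the pod's Ready condition status ("True"/"False") via kubectl.
func podReady(ctx context.Context, ns, pod string) (bool, error) {
	out, err := exec.CommandContext(ctx, "kubectl", "get", "pod", pod, "-n", ns,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// 4m0s deadline and ~2s poll interval, matching the cadence in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		if ready, err := podReady(ctx, "kube-system", "metrics-server-9975d5f86-zb7nt"); err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("WaitExtra: waitPodCondition: context deadline exceeded")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
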
	I0819 12:25:13.231736  501046 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:25:13.231764  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:13.231827  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:13.278233  501046 cri.go:89] found id: "4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:13.278259  501046 cri.go:89] found id: "448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:13.278275  501046 cri.go:89] found id: ""
	I0819 12:25:13.278283  501046 logs.go:276] 2 containers: [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152]
	I0819 12:25:13.278350  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.282134  501046 ssh_runner.go:195] Run: which crictl
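
Each component lookup in this phase runs the same crictl invocation shown in the log and splits the returned IDs (the "2 containers: [...]" summaries). A minimal sketch under that assumption, reusing the exact flags from the log; the listContainers helper name is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers in any state whose name
// matches the given component, e.g. "kube-apiserver", via:
//   sudo crictl ps -a --quiet --name=<component>
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
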
	I0819 12:25:13.285650  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:13.285720  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:13.336849  501046 cri.go:89] found id: "f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:13.336869  501046 cri.go:89] found id: "f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:13.336874  501046 cri.go:89] found id: ""
	I0819 12:25:13.336882  501046 logs.go:276] 2 containers: [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8]
	I0819 12:25:13.336938  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.343448  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.354543  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:13.354614  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:13.396355  501046 cri.go:89] found id: "a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:13.396380  501046 cri.go:89] found id: "52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:13.396386  501046 cri.go:89] found id: ""
	I0819 12:25:13.396395  501046 logs.go:276] 2 containers: [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d]
	I0819 12:25:13.396504  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.400677  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.404578  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:13.404709  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:13.452898  501046 cri.go:89] found id: "309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:13.452919  501046 cri.go:89] found id: "1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:13.452924  501046 cri.go:89] found id: ""
	I0819 12:25:13.452931  501046 logs.go:276] 2 containers: [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32]
	I0819 12:25:13.452991  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.457423  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.461432  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:13.461554  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:13.522946  501046 cri.go:89] found id: "8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:13.522971  501046 cri.go:89] found id: "495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:13.522976  501046 cri.go:89] found id: ""
	I0819 12:25:13.522983  501046 logs.go:276] 2 containers: [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6]
	I0819 12:25:13.523075  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.527003  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.530703  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:13.530803  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:13.575754  501046 cri.go:89] found id: "ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:13.575778  501046 cri.go:89] found id: "b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:13.575783  501046 cri.go:89] found id: ""
	I0819 12:25:13.575802  501046 logs.go:276] 2 containers: [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566]
	I0819 12:25:13.575883  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.579970  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.583748  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:13.583830  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:13.623809  501046 cri.go:89] found id: "312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:13.623844  501046 cri.go:89] found id: "ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:13.623848  501046 cri.go:89] found id: ""
	I0819 12:25:13.623856  501046 logs.go:276] 2 containers: [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b]
	I0819 12:25:13.623932  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.627973  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.632210  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:13.632333  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:13.692474  501046 cri.go:89] found id: "d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:13.692537  501046 cri.go:89] found id: "7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:13.692549  501046 cri.go:89] found id: ""
	I0819 12:25:13.692557  501046 logs.go:276] 2 containers: [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b]
	I0819 12:25:13.692619  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.697385  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.701826  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:13.701927  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:13.745714  501046 cri.go:89] found id: "e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:13.745787  501046 cri.go:89] found id: ""
	I0819 12:25:13.745800  501046 logs.go:276] 1 containers: [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60]
	I0819 12:25:13.745875  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:13.749858  501046 logs.go:123] Gathering logs for kube-apiserver [448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152] ...
	I0819 12:25:13.749885  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:13.815438  501046 logs.go:123] Gathering logs for etcd [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635] ...
	I0819 12:25:13.815473  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:13.860833  501046 logs.go:123] Gathering logs for etcd [f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8] ...
	I0819 12:25:13.860871  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:13.901919  501046 logs.go:123] Gathering logs for kube-scheduler [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712] ...
	I0819 12:25:13.901947  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:13.948288  501046 logs.go:123] Gathering logs for storage-provisioner [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929] ...
	I0819 12:25:13.948323  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:13.991333  501046 logs.go:123] Gathering logs for kube-apiserver [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61] ...
	I0819 12:25:13.991362  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:14.058020  501046 logs.go:123] Gathering logs for kube-proxy [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be] ...
	I0819 12:25:14.058057  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:14.101288  501046 logs.go:123] Gathering logs for container status ...
	I0819 12:25:14.101321  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:14.146833  501046 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:14.146864  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:25:14.201698  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865111     673 reflector.go:138] object-"default"/"default-token-ddbn8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ddbn8" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.201957  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865470     673 reflector.go:138] object-"kube-system"/"coredns-token-24w5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-24w5r" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202184  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865674     673 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202395  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865769     673 reflector.go:138] object-"kube-system"/"kindnet-token-45phz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-45phz" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202610  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866431     673 reflector.go:138] object-"kube-system"/"kube-proxy-token-6m5lt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6m5lt" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.202838  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866509     673 reflector.go:138] object-"kube-system"/"storage-provisioner-token-lvtph": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-lvtph" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.203068  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866580     673 reflector.go:138] object-"kube-system"/"metrics-server-token-hgch9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hgch9" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.203423  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866601     673 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:14.211248  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.439458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.212820  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.960458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.215642  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:04 old-k8s-version-091610 kubelet[673]: E0819 12:20:04.809492     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.217432  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:15 old-k8s-version-091610 kubelet[673]: E0819 12:20:15.775810     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.217895  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:16 old-k8s-version-091610 kubelet[673]: E0819 12:20:16.146247     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.218235  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:17 old-k8s-version-091610 kubelet[673]: E0819 12:20:17.149552     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.218564  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:18 old-k8s-version-091610 kubelet[673]: E0819 12:20:18.503760     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.221343  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:27 old-k8s-version-091610 kubelet[673]: E0819 12:20:27.783794     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.222272  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:34 old-k8s-version-091610 kubelet[673]: E0819 12:20:34.224236     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.222598  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:38 old-k8s-version-091610 kubelet[673]: E0819 12:20:38.503822     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.222781  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:41 old-k8s-version-091610 kubelet[673]: E0819 12:20:41.779721     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.223191  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:49 old-k8s-version-091610 kubelet[673]: E0819 12:20:49.769540     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.223380  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:53 old-k8s-version-091610 kubelet[673]: E0819 12:20:53.783516     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.223970  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:01 old-k8s-version-091610 kubelet[673]: E0819 12:21:01.379793     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.224153  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:05 old-k8s-version-091610 kubelet[673]: E0819 12:21:05.770046     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.224481  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:08 old-k8s-version-091610 kubelet[673]: E0819 12:21:08.504303     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.226909  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:19 old-k8s-version-091610 kubelet[673]: E0819 12:21:19.779165     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.227248  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:22 old-k8s-version-091610 kubelet[673]: E0819 12:21:22.771924     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.227440  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:31 old-k8s-version-091610 kubelet[673]: E0819 12:21:31.769891     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.227798  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:36 old-k8s-version-091610 kubelet[673]: E0819 12:21:36.770025     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.227984  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:46 old-k8s-version-091610 kubelet[673]: E0819 12:21:46.779430     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.228572  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:48 old-k8s-version-091610 kubelet[673]: E0819 12:21:48.496386     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.228896  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:49 old-k8s-version-091610 kubelet[673]: E0819 12:21:49.499775     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.229081  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:59 old-k8s-version-091610 kubelet[673]: E0819 12:21:59.770059     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.229409  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:01 old-k8s-version-091610 kubelet[673]: E0819 12:22:01.769614     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.229593  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:11 old-k8s-version-091610 kubelet[673]: E0819 12:22:11.770374     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.229921  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:12 old-k8s-version-091610 kubelet[673]: E0819 12:22:12.769816     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.230251  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:23 old-k8s-version-091610 kubelet[673]: E0819 12:22:23.769634     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.230434  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:25 old-k8s-version-091610 kubelet[673]: E0819 12:22:25.770010     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.230619  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:36 old-k8s-version-091610 kubelet[673]: E0819 12:22:36.771414     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.230951  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:38 old-k8s-version-091610 kubelet[673]: E0819 12:22:38.769996     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.233455  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:48 old-k8s-version-091610 kubelet[673]: E0819 12:22:48.780860     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:14.233794  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:52 old-k8s-version-091610 kubelet[673]: E0819 12:22:52.770145     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.233987  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:00 old-k8s-version-091610 kubelet[673]: E0819 12:23:00.770567     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.234319  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:07 old-k8s-version-091610 kubelet[673]: E0819 12:23:07.769578     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.234504  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:12 old-k8s-version-091610 kubelet[673]: E0819 12:23:12.772710     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.235096  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:22 old-k8s-version-091610 kubelet[673]: E0819 12:23:22.733720     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.235282  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:26 old-k8s-version-091610 kubelet[673]: E0819 12:23:26.770120     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.235611  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:28 old-k8s-version-091610 kubelet[673]: E0819 12:23:28.503831     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.235799  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:38 old-k8s-version-091610 kubelet[673]: E0819 12:23:38.770038     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.236126  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:42 old-k8s-version-091610 kubelet[673]: E0819 12:23:42.769697     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.236311  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:50 old-k8s-version-091610 kubelet[673]: E0819 12:23:50.774063     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.236639  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:57 old-k8s-version-091610 kubelet[673]: E0819 12:23:57.769849     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.236828  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:03 old-k8s-version-091610 kubelet[673]: E0819 12:24:03.769943     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.237153  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: E0819 12:24:12.770074     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.237337  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:18 old-k8s-version-091610 kubelet[673]: E0819 12:24:18.770049     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.237665  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: E0819 12:24:27.769551     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.237851  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:32 old-k8s-version-091610 kubelet[673]: E0819 12:24:32.769991     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.238178  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.238362  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.238686  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:14.238869  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:14.239200  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
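
The run of "Found kubelet problem" warnings above comes from scanning the 400 journal lines fetched two steps earlier for kubelet error entries (here: the intentionally unreachable fake.domain metrics-server image and the crash-looping dashboard-metrics-scraper). A minimal sketch of such a scan; the regexp here is an assumption fitted to the lines shown, not minikube's actual matcher in logs.go:138:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	// Same journal fetch as shown in the log above.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Assumption: flag kubelet glog error lines (level "E") from pod_workers.go
	// or reflector.go as "problems"; the real pattern set may be broader.
	problem := regexp.MustCompile(`kubelet\[\d+\]: E\d+ .*(pod_workers|reflector)\.go`)
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if problem.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}
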
	I0819 12:25:14.239212  501046 logs.go:123] Gathering logs for kube-scheduler [1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32] ...
	I0819 12:25:14.239227  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:14.281119  501046 logs.go:123] Gathering logs for kube-proxy [495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6] ...
	I0819 12:25:14.281158  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:14.333451  501046 logs.go:123] Gathering logs for kindnet [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f] ...
	I0819 12:25:14.333481  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:14.397535  501046 logs.go:123] Gathering logs for kindnet [ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b] ...
	I0819 12:25:14.397569  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:14.462615  501046 logs.go:123] Gathering logs for kubernetes-dashboard [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60] ...
	I0819 12:25:14.462649  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:14.509982  501046 logs.go:123] Gathering logs for storage-provisioner [7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b] ...
	I0819 12:25:14.510040  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:14.550272  501046 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:14.550299  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:14.612323  501046 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:14.612358  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:14.632096  501046 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:14.632172  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:25:14.809269  501046 logs.go:123] Gathering logs for coredns [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210] ...
	I0819 12:25:14.809356  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:14.866213  501046 logs.go:123] Gathering logs for coredns [52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d] ...
	I0819 12:25:14.866243  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:14.907317  501046 logs.go:123] Gathering logs for kube-controller-manager [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524] ...
	I0819 12:25:14.907344  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:14.970762  501046 logs.go:123] Gathering logs for kube-controller-manager [b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566] ...
	I0819 12:25:14.970797  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:14.928452  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:17.428856  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:15.044244  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:15.045762  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:25:15.045885  501046 out.go:270] X Problems detected in kubelet:
	W0819 12:25:15.056894  501046 out.go:270]   Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:15.057167  501046 out.go:270]   Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:15.057177  501046 out.go:270]   Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:15.057184  501046 out.go:270]   Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:15.057189  501046 out.go:270]   Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:15.057204  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:15.057276  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:25:19.929213  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:22.428165  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:24.429050  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:26.431433  506775 pod_ready.go:103] pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace has status "Ready":"False"
	I0819 12:25:26.928920  506775 pod_ready.go:82] duration metric: took 4m0.007378933s for pod "metrics-server-6867b74b74-pjh8s" in "kube-system" namespace to be "Ready" ...
	E0819 12:25:26.928944  506775 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0819 12:25:26.928954  506775 pod_ready.go:39] duration metric: took 4m0.61316845s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
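
The 4m0.007378933s above is the per-pod readiness budget being exhausted: metrics-server-6867b74b74-pjh8s never reports Ready, so the extra wait ends in a context deadline. A minimal sketch of an equivalent standalone check with plain kubectl (assuming the current context points at this cluster):

# wait up to the same 4-minute budget for the pod to become Ready
kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-pjh8s --timeout=4m
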
	I0819 12:25:26.928968  506775 api_server.go:52] waiting for apiserver process to appear ...
	I0819 12:25:26.929005  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:26.929065  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:26.975858  506775 cri.go:89] found id: "8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836"
	I0819 12:25:26.975883  506775 cri.go:89] found id: "3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f"
	I0819 12:25:26.975887  506775 cri.go:89] found id: ""
	I0819 12:25:26.975895  506775 logs.go:276] 2 containers: [8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836 3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f]
	I0819 12:25:26.975953  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:26.980097  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:26.983873  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:26.983970  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:27.036642  506775 cri.go:89] found id: "71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c"
	I0819 12:25:27.036667  506775 cri.go:89] found id: "1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf"
	I0819 12:25:27.036673  506775 cri.go:89] found id: ""
	I0819 12:25:27.036680  506775 logs.go:276] 2 containers: [71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c 1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf]
	I0819 12:25:27.036752  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.040753  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.044685  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:27.044757  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:27.098807  506775 cri.go:89] found id: "4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a"
	I0819 12:25:27.098946  506775 cri.go:89] found id: "124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c"
	I0819 12:25:27.098971  506775 cri.go:89] found id: ""
	I0819 12:25:27.098985  506775 logs.go:276] 2 containers: [4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a 124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c]
	I0819 12:25:27.099046  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.103138  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.107221  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:27.107298  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:27.150389  506775 cri.go:89] found id: "6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8"
	I0819 12:25:27.150426  506775 cri.go:89] found id: "3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7"
	I0819 12:25:27.150432  506775 cri.go:89] found id: ""
	I0819 12:25:27.150440  506775 logs.go:276] 2 containers: [6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8 3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7]
	I0819 12:25:27.150517  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.155025  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.158671  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:27.158746  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:27.205084  506775 cri.go:89] found id: "16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4"
	I0819 12:25:27.205111  506775 cri.go:89] found id: "0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96"
	I0819 12:25:27.205118  506775 cri.go:89] found id: ""
	I0819 12:25:27.205126  506775 logs.go:276] 2 containers: [16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4 0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96]
	I0819 12:25:27.205209  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.210315  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.214242  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:27.214317  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:27.255447  506775 cri.go:89] found id: "db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8"
	I0819 12:25:27.255468  506775 cri.go:89] found id: "80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401"
	I0819 12:25:27.255472  506775 cri.go:89] found id: ""
	I0819 12:25:27.255480  506775 logs.go:276] 2 containers: [db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8 80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401]
	I0819 12:25:27.255549  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.260831  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.264628  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:27.264746  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:27.308021  506775 cri.go:89] found id: "49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a"
	I0819 12:25:27.308044  506775 cri.go:89] found id: "597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29"
	I0819 12:25:27.308052  506775 cri.go:89] found id: ""
	I0819 12:25:27.308059  506775 logs.go:276] 2 containers: [49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a 597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29]
	I0819 12:25:27.308116  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.311664  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.315265  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:27.315375  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:27.371040  506775 cri.go:89] found id: "408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6"
	I0819 12:25:27.371106  506775 cri.go:89] found id: ""
	I0819 12:25:27.371127  506775 logs.go:276] 1 containers: [408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6]
	I0819 12:25:27.371211  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.375734  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:27.375844  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:27.424494  506775 cri.go:89] found id: "cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6"
	I0819 12:25:27.424558  506775 cri.go:89] found id: "c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b"
	I0819 12:25:27.424577  506775 cri.go:89] found id: ""
	I0819 12:25:27.424595  506775 logs.go:276] 2 containers: [cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6 c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b]
	I0819 12:25:27.424677  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:27.428492  506775 ssh_runner.go:195] Run: which crictl
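
The block above is minikube enumerating the running and exited container IDs for each control-plane component before tailing their logs; every "found id" pair is one such lookup. A sketch of the same enumeration on the node, using the identical crictl invocation echoed in the log:

# list all containers (running and exited) per component name
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
    kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  echo "$name: ${ids:-<none>}"
done
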
	I0819 12:25:27.432425  506775 logs.go:123] Gathering logs for etcd [71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c] ...
	I0819 12:25:27.432453  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c"
	I0819 12:25:27.490369  506775 logs.go:123] Gathering logs for coredns [124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c] ...
	I0819 12:25:27.490402  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c"
	I0819 12:25:27.530854  506775 logs.go:123] Gathering logs for kube-scheduler [3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7] ...
	I0819 12:25:27.530920  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7"
	I0819 12:25:27.594531  506775 logs.go:123] Gathering logs for kube-controller-manager [db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8] ...
	I0819 12:25:27.594567  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8"
	I0819 12:25:27.663122  506775 logs.go:123] Gathering logs for storage-provisioner [c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b] ...
	I0819 12:25:27.663157  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b"
	I0819 12:25:27.704660  506775 logs.go:123] Gathering logs for kube-apiserver [3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f] ...
	I0819 12:25:27.704685  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f"
	I0819 12:25:27.752250  506775 logs.go:123] Gathering logs for etcd [1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf] ...
	I0819 12:25:27.752285  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf"
	I0819 12:25:27.806415  506775 logs.go:123] Gathering logs for kube-proxy [16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4] ...
	I0819 12:25:27.806444  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4"
	I0819 12:25:27.850064  506775 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:27.850094  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 12:25:27.931483  506775 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:27.931518  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:25:28.093938  506775 logs.go:123] Gathering logs for kube-apiserver [8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836] ...
	I0819 12:25:28.093968  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836"
	I0819 12:25:28.151969  506775 logs.go:123] Gathering logs for coredns [4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a] ...
	I0819 12:25:28.152002  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a"
	I0819 12:25:28.198526  506775 logs.go:123] Gathering logs for kube-proxy [0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96] ...
	I0819 12:25:28.198556  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96"
	I0819 12:25:28.239287  506775 logs.go:123] Gathering logs for kube-controller-manager [80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401] ...
	I0819 12:25:28.239317  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401"
	I0819 12:25:28.301532  506775 logs.go:123] Gathering logs for kindnet [49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a] ...
	I0819 12:25:28.301570  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a"
	I0819 12:25:28.375801  506775 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:28.375836  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:28.444180  506775 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:28.444220  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:28.463704  506775 logs.go:123] Gathering logs for container status ...
	I0819 12:25:28.463782  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:28.527829  506775 logs.go:123] Gathering logs for kindnet [597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29] ...
	I0819 12:25:28.527907  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29"
	I0819 12:25:28.576035  506775 logs.go:123] Gathering logs for kubernetes-dashboard [408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6] ...
	I0819 12:25:28.576070  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6"
	I0819 12:25:28.620438  506775 logs.go:123] Gathering logs for storage-provisioner [cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6] ...
	I0819 12:25:28.620519  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6"
	I0819 12:25:28.658110  506775 logs.go:123] Gathering logs for kube-scheduler [6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8] ...
	I0819 12:25:28.658179  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8"
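
Each "Gathering logs for ..." step above shells out to the command echoed beside it. Collected into one sketch (the container ID is a placeholder to be filled from the crictl ps output above; the journalctl, dmesg, and crictl invocations are the ones shown in the log):

sudo journalctl -u kubelet -n 400
sudo journalctl -u containerd -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo crictl logs --tail 400 <container-id>   # repeat for each ID found above
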
	I0819 12:25:25.058776  501046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:25:25.072234  501046 api_server.go:72] duration metric: took 5m52.234152061s to wait for apiserver process to appear ...
	I0819 12:25:25.072260  501046 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:25:25.072299  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:25.072360  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:25.126268  501046 cri.go:89] found id: "4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:25.126304  501046 cri.go:89] found id: "448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:25.126309  501046 cri.go:89] found id: ""
	I0819 12:25:25.126317  501046 logs.go:276] 2 containers: [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152]
	I0819 12:25:25.126384  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.130959  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.135147  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:25.135222  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:25.179964  501046 cri.go:89] found id: "f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:25.179989  501046 cri.go:89] found id: "f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:25.179995  501046 cri.go:89] found id: ""
	I0819 12:25:25.180003  501046 logs.go:276] 2 containers: [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8]
	I0819 12:25:25.180069  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.184388  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.188376  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:25.188454  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:25.236043  501046 cri.go:89] found id: "a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:25.236069  501046 cri.go:89] found id: "52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:25.236076  501046 cri.go:89] found id: ""
	I0819 12:25:25.236084  501046 logs.go:276] 2 containers: [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d]
	I0819 12:25:25.236146  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.239980  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.243901  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:25.243981  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:25.289379  501046 cri.go:89] found id: "309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:25.289399  501046 cri.go:89] found id: "1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:25.289404  501046 cri.go:89] found id: ""
	I0819 12:25:25.289411  501046 logs.go:276] 2 containers: [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32]
	I0819 12:25:25.289473  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.293422  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.297200  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:25.297273  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:25.355487  501046 cri.go:89] found id: "8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:25.355511  501046 cri.go:89] found id: "495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:25.355516  501046 cri.go:89] found id: ""
	I0819 12:25:25.355523  501046 logs.go:276] 2 containers: [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6]
	I0819 12:25:25.355580  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.359673  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.363763  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:25.363845  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:25.407918  501046 cri.go:89] found id: "ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:25.407998  501046 cri.go:89] found id: "b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:25.408009  501046 cri.go:89] found id: ""
	I0819 12:25:25.408017  501046 logs.go:276] 2 containers: [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566]
	I0819 12:25:25.408087  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.412111  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.416354  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:25.416467  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:25.465412  501046 cri.go:89] found id: "312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:25.465436  501046 cri.go:89] found id: "ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:25.465441  501046 cri.go:89] found id: ""
	I0819 12:25:25.465449  501046 logs.go:276] 2 containers: [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b]
	I0819 12:25:25.465538  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.469407  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.473071  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:25.473191  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:25.521059  501046 cri.go:89] found id: "d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:25.521081  501046 cri.go:89] found id: "7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:25.521086  501046 cri.go:89] found id: ""
	I0819 12:25:25.521094  501046 logs.go:276] 2 containers: [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b]
	I0819 12:25:25.521154  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.525152  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.528765  501046 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:25.528852  501046 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:25.568893  501046 cri.go:89] found id: "e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:25.568965  501046 cri.go:89] found id: ""
	I0819 12:25:25.568986  501046 logs.go:276] 1 containers: [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60]
	I0819 12:25:25.569076  501046 ssh_runner.go:195] Run: which crictl
	I0819 12:25:25.573043  501046 logs.go:123] Gathering logs for kube-scheduler [1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32] ...
	I0819 12:25:25.573094  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32"
	I0819 12:25:25.620654  501046 logs.go:123] Gathering logs for kindnet [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f] ...
	I0819 12:25:25.620685  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f"
	I0819 12:25:25.686239  501046 logs.go:123] Gathering logs for kindnet [ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b] ...
	I0819 12:25:25.686276  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b"
	I0819 12:25:25.736109  501046 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:25.736143  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0819 12:25:25.900489  501046 logs.go:123] Gathering logs for kube-apiserver [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61] ...
	I0819 12:25:25.900519  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61"
	I0819 12:25:25.972186  501046 logs.go:123] Gathering logs for etcd [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635] ...
	I0819 12:25:25.972220  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635"
	I0819 12:25:26.029587  501046 logs.go:123] Gathering logs for coredns [52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d] ...
	I0819 12:25:26.029664  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d"
	I0819 12:25:26.074199  501046 logs.go:123] Gathering logs for kube-scheduler [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712] ...
	I0819 12:25:26.074229  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712"
	I0819 12:25:26.117377  501046 logs.go:123] Gathering logs for kubernetes-dashboard [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60] ...
	I0819 12:25:26.117408  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60"
	I0819 12:25:26.156566  501046 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:26.156593  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:26.173495  501046 logs.go:123] Gathering logs for kube-apiserver [448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152] ...
	I0819 12:25:26.173522  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152"
	I0819 12:25:26.254028  501046 logs.go:123] Gathering logs for coredns [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210] ...
	I0819 12:25:26.254063  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210"
	I0819 12:25:26.297887  501046 logs.go:123] Gathering logs for kube-controller-manager [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524] ...
	I0819 12:25:26.297918  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524"
	I0819 12:25:26.375578  501046 logs.go:123] Gathering logs for kube-controller-manager [b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566] ...
	I0819 12:25:26.375616  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566"
	I0819 12:25:26.464701  501046 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:26.464745  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0819 12:25:26.519991  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865111     673 reflector.go:138] object-"default"/"default-token-ddbn8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-ddbn8" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520228  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865470     673 reflector.go:138] object-"kube-system"/"coredns-token-24w5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-24w5r" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520440  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865674     673 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520655  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.865769     673 reflector.go:138] object-"kube-system"/"kindnet-token-45phz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-45phz" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.520872  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866431     673 reflector.go:138] object-"kube-system"/"kube-proxy-token-6m5lt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6m5lt" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521102  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866509     673 reflector.go:138] object-"kube-system"/"storage-provisioner-token-lvtph": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-lvtph" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521341  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866580     673 reflector.go:138] object-"kube-system"/"metrics-server-token-hgch9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hgch9" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.521548  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:48 old-k8s-version-091610 kubelet[673]: E0819 12:19:48.866601     673 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-091610" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-091610' and this object
	W0819 12:25:26.529491  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.439458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.531105  501046 logs.go:138] Found kubelet problem: Aug 19 12:19:51 old-k8s-version-091610 kubelet[673]: E0819 12:19:51.960458     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.533946  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:04 old-k8s-version-091610 kubelet[673]: E0819 12:20:04.809492     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.535831  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:15 old-k8s-version-091610 kubelet[673]: E0819 12:20:15.775810     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.536295  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:16 old-k8s-version-091610 kubelet[673]: E0819 12:20:16.146247     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.536631  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:17 old-k8s-version-091610 kubelet[673]: E0819 12:20:17.149552     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.536963  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:18 old-k8s-version-091610 kubelet[673]: E0819 12:20:18.503760     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.539794  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:27 old-k8s-version-091610 kubelet[673]: E0819 12:20:27.783794     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.540740  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:34 old-k8s-version-091610 kubelet[673]: E0819 12:20:34.224236     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541073  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:38 old-k8s-version-091610 kubelet[673]: E0819 12:20:38.503822     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541260  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:41 old-k8s-version-091610 kubelet[673]: E0819 12:20:41.779721     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.541590  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:49 old-k8s-version-091610 kubelet[673]: E0819 12:20:49.769540     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.541778  501046 logs.go:138] Found kubelet problem: Aug 19 12:20:53 old-k8s-version-091610 kubelet[673]: E0819 12:20:53.783516     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.542378  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:01 old-k8s-version-091610 kubelet[673]: E0819 12:21:01.379793     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.542565  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:05 old-k8s-version-091610 kubelet[673]: E0819 12:21:05.770046     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.542903  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:08 old-k8s-version-091610 kubelet[673]: E0819 12:21:08.504303     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.545363  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:19 old-k8s-version-091610 kubelet[673]: E0819 12:21:19.779165     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.545698  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:22 old-k8s-version-091610 kubelet[673]: E0819 12:21:22.771924     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.545890  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:31 old-k8s-version-091610 kubelet[673]: E0819 12:21:31.769891     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.546222  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:36 old-k8s-version-091610 kubelet[673]: E0819 12:21:36.770025     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.546434  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:46 old-k8s-version-091610 kubelet[673]: E0819 12:21:46.779430     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.547032  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:48 old-k8s-version-091610 kubelet[673]: E0819 12:21:48.496386     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.547368  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:49 old-k8s-version-091610 kubelet[673]: E0819 12:21:49.499775     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.547557  501046 logs.go:138] Found kubelet problem: Aug 19 12:21:59 old-k8s-version-091610 kubelet[673]: E0819 12:21:59.770059     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.547890  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:01 old-k8s-version-091610 kubelet[673]: E0819 12:22:01.769614     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548076  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:11 old-k8s-version-091610 kubelet[673]: E0819 12:22:11.770374     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.548409  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:12 old-k8s-version-091610 kubelet[673]: E0819 12:22:12.769816     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548740  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:23 old-k8s-version-091610 kubelet[673]: E0819 12:22:23.769634     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.548929  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:25 old-k8s-version-091610 kubelet[673]: E0819 12:22:25.770010     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.549115  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:36 old-k8s-version-091610 kubelet[673]: E0819 12:22:36.771414     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.549448  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:38 old-k8s-version-091610 kubelet[673]: E0819 12:22:38.769996     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.551930  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:48 old-k8s-version-091610 kubelet[673]: E0819 12:22:48.780860     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0819 12:25:26.552266  501046 logs.go:138] Found kubelet problem: Aug 19 12:22:52 old-k8s-version-091610 kubelet[673]: E0819 12:22:52.770145     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.552458  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:00 old-k8s-version-091610 kubelet[673]: E0819 12:23:00.770567     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.552792  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:07 old-k8s-version-091610 kubelet[673]: E0819 12:23:07.769578     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.552978  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:12 old-k8s-version-091610 kubelet[673]: E0819 12:23:12.772710     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.553577  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:22 old-k8s-version-091610 kubelet[673]: E0819 12:23:22.733720     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.553765  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:26 old-k8s-version-091610 kubelet[673]: E0819 12:23:26.770120     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.554099  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:28 old-k8s-version-091610 kubelet[673]: E0819 12:23:28.503831     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.554285  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:38 old-k8s-version-091610 kubelet[673]: E0819 12:23:38.770038     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.554623  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:42 old-k8s-version-091610 kubelet[673]: E0819 12:23:42.769697     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.554811  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:50 old-k8s-version-091610 kubelet[673]: E0819 12:23:50.774063     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.555146  501046 logs.go:138] Found kubelet problem: Aug 19 12:23:57 old-k8s-version-091610 kubelet[673]: E0819 12:23:57.769849     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.555333  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:03 old-k8s-version-091610 kubelet[673]: E0819 12:24:03.769943     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.555663  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: E0819 12:24:12.770074     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.555850  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:18 old-k8s-version-091610 kubelet[673]: E0819 12:24:18.770049     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.556178  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: E0819 12:24:27.769551     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.556364  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:32 old-k8s-version-091610 kubelet[673]: E0819 12:24:32.769991     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.556695  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.556880  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.557210  501046 logs.go:138] Found kubelet problem: Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.557395  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.557725  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.557911  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.558246  501046 logs.go:138] Found kubelet problem: Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:26.558257  501046 logs.go:123] Gathering logs for etcd [f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8] ...
	I0819 12:25:26.558273  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8"
	I0819 12:25:26.610680  501046 logs.go:123] Gathering logs for kube-proxy [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be] ...
	I0819 12:25:26.610707  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be"
	I0819 12:25:26.659447  501046 logs.go:123] Gathering logs for kube-proxy [495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6] ...
	I0819 12:25:26.659473  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6"
	I0819 12:25:26.700658  501046 logs.go:123] Gathering logs for storage-provisioner [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929] ...
	I0819 12:25:26.700685  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929"
	I0819 12:25:26.743391  501046 logs.go:123] Gathering logs for storage-provisioner [7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b] ...
	I0819 12:25:26.743418  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b"
	I0819 12:25:26.791267  501046 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:26.791299  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:26.849932  501046 logs.go:123] Gathering logs for container status ...
	I0819 12:25:26.849967  501046 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:26.906316  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:26.906346  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0819 12:25:26.906392  501046 out.go:270] X Problems detected in kubelet:
	W0819 12:25:26.906427  501046 out.go:270]   Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.906433  501046 out.go:270]   Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.906450  501046 out.go:270]   Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	W0819 12:25:26.906457  501046 out.go:270]   Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0819 12:25:26.906462  501046 out.go:270]   Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	I0819 12:25:26.906474  501046 out.go:358] Setting ErrFile to fd 2...
	I0819 12:25:26.906482  501046 out.go:392] TERM=,COLORTERM=, which probably does not support color
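	The repeated warnings above are minikube's log scanner flagging two persistent kubelet problems on old-k8s-version-091610: metrics-server cannot start because its image reference fake.domain/registry.k8s.io/echoserver:1.4 points at an unresolvable registry (the host is literally fake.domain, so ImagePullBackOff is expected), and dashboard-metrics-scraper is stuck in a 2m40s CrashLoopBackOff. A minimal sketch for confirming both states by hand, assuming kubectl access to the same profile context and using the pod names from the log:
	
	    kubectl --context old-k8s-version-091610 -n kube-system describe pod metrics-server-9975d5f86-zb7nt
	    kubectl --context old-k8s-version-091610 -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-kgs2g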
	I0819 12:25:31.199546  506775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:25:31.212299  506775 api_server.go:72] duration metric: took 4m9.459665157s to wait for apiserver process to appear ...
	I0819 12:25:31.212373  506775 api_server.go:88] waiting for apiserver healthz status ...
	I0819 12:25:31.212416  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0819 12:25:31.212486  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0819 12:25:31.252775  506775 cri.go:89] found id: "8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836"
	I0819 12:25:31.252804  506775 cri.go:89] found id: "3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f"
	I0819 12:25:31.252812  506775 cri.go:89] found id: ""
	I0819 12:25:31.252819  506775 logs.go:276] 2 containers: [8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836 3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f]
	I0819 12:25:31.252968  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.256996  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.264428  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0819 12:25:31.264535  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0819 12:25:31.307725  506775 cri.go:89] found id: "71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c"
	I0819 12:25:31.307750  506775 cri.go:89] found id: "1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf"
	I0819 12:25:31.307755  506775 cri.go:89] found id: ""
	I0819 12:25:31.307763  506775 logs.go:276] 2 containers: [71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c 1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf]
	I0819 12:25:31.307821  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.311944  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.315418  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0819 12:25:31.315492  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0819 12:25:31.368995  506775 cri.go:89] found id: "4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a"
	I0819 12:25:31.369074  506775 cri.go:89] found id: "124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c"
	I0819 12:25:31.369095  506775 cri.go:89] found id: ""
	I0819 12:25:31.369109  506775 logs.go:276] 2 containers: [4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a 124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c]
	I0819 12:25:31.369180  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.373065  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.376673  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0819 12:25:31.376760  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0819 12:25:31.437059  506775 cri.go:89] found id: "6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8"
	I0819 12:25:31.437080  506775 cri.go:89] found id: "3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7"
	I0819 12:25:31.437085  506775 cri.go:89] found id: ""
	I0819 12:25:31.437093  506775 logs.go:276] 2 containers: [6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8 3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7]
	I0819 12:25:31.437148  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.440732  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.444357  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0819 12:25:31.444446  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0819 12:25:31.487315  506775 cri.go:89] found id: "16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4"
	I0819 12:25:31.487380  506775 cri.go:89] found id: "0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96"
	I0819 12:25:31.487399  506775 cri.go:89] found id: ""
	I0819 12:25:31.487421  506775 logs.go:276] 2 containers: [16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4 0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96]
	I0819 12:25:31.487490  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.491432  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.501380  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0819 12:25:31.501469  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0819 12:25:31.544703  506775 cri.go:89] found id: "db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8"
	I0819 12:25:31.544772  506775 cri.go:89] found id: "80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401"
	I0819 12:25:31.544783  506775 cri.go:89] found id: ""
	I0819 12:25:31.544792  506775 logs.go:276] 2 containers: [db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8 80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401]
	I0819 12:25:31.544851  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.548524  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.552025  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0819 12:25:31.552149  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0819 12:25:31.604026  506775 cri.go:89] found id: "49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a"
	I0819 12:25:31.604048  506775 cri.go:89] found id: "597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29"
	I0819 12:25:31.604052  506775 cri.go:89] found id: ""
	I0819 12:25:31.604060  506775 logs.go:276] 2 containers: [49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a 597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29]
	I0819 12:25:31.604114  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.607751  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.611280  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0819 12:25:31.611369  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0819 12:25:31.654386  506775 cri.go:89] found id: "408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6"
	I0819 12:25:31.654408  506775 cri.go:89] found id: ""
	I0819 12:25:31.654415  506775 logs.go:276] 1 containers: [408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6]
	I0819 12:25:31.654471  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.658726  506775 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0819 12:25:31.658796  506775 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0819 12:25:31.702053  506775 cri.go:89] found id: "cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6"
	I0819 12:25:31.702077  506775 cri.go:89] found id: "c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b"
	I0819 12:25:31.702082  506775 cri.go:89] found id: ""
	I0819 12:25:31.702090  506775 logs.go:276] 2 containers: [cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6 c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b]
	I0819 12:25:31.702146  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.705893  506775 ssh_runner.go:195] Run: which crictl
	I0819 12:25:31.709446  506775 logs.go:123] Gathering logs for dmesg ...
	I0819 12:25:31.709509  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0819 12:25:31.727750  506775 logs.go:123] Gathering logs for etcd [71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c] ...
	I0819 12:25:31.727850  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71b136bcd944e3e244b3617b908c4b8ce62ccc5992841a4ced5ad9956939118c"
	I0819 12:25:31.799044  506775 logs.go:123] Gathering logs for etcd [1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf] ...
	I0819 12:25:31.799119  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1221ed0d57efc908de20b1a0bfc703ea95699d6db4f60b5d2d2fc087d37714bf"
	I0819 12:25:31.854685  506775 logs.go:123] Gathering logs for kube-proxy [0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96] ...
	I0819 12:25:31.854767  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0bb7433777c470db1fa11e0481317ae603e85048c1e3b61a8cb0df0509de2f96"
	I0819 12:25:31.902643  506775 logs.go:123] Gathering logs for kubelet ...
	I0819 12:25:31.902712  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0819 12:25:31.981557  506775 logs.go:123] Gathering logs for coredns [4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a] ...
	I0819 12:25:31.981592  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4175b9236449c47bc76b55d1b66e0a2f5eb6a494d652e0d61e523de3c052979a"
	I0819 12:25:32.025948  506775 logs.go:123] Gathering logs for coredns [124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c] ...
	I0819 12:25:32.025978  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 124128cbd1e5d2b757d30236510a2f7775b5f133017bdb53fc286fa7673de17c"
	I0819 12:25:32.091005  506775 logs.go:123] Gathering logs for kube-scheduler [3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7] ...
	I0819 12:25:32.091033  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f438763d4391ba5c2045a523f101f3454bd109ac70f3dd614d83dc21b9100d7"
	I0819 12:25:32.167711  506775 logs.go:123] Gathering logs for kube-proxy [16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4] ...
	I0819 12:25:32.167743  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16543fc65a1be3182fec4ab65415b08c4149b41f875d21e7e70c4d3f9c5035e4"
	I0819 12:25:32.210458  506775 logs.go:123] Gathering logs for kubernetes-dashboard [408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6] ...
	I0819 12:25:32.210486  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 408674f5d24a3f0915a71e88dbefe58dbeadf401736b664e699d803b2bfad6e6"
	I0819 12:25:32.256823  506775 logs.go:123] Gathering logs for storage-provisioner [c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b] ...
	I0819 12:25:32.256851  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9b68086ed65f933fa0c022f76cfa10a03814df0ff78fc874b3e42cba636fc7b"
	I0819 12:25:32.303055  506775 logs.go:123] Gathering logs for container status ...
	I0819 12:25:32.303083  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0819 12:25:32.358733  506775 logs.go:123] Gathering logs for kube-apiserver [8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836] ...
	I0819 12:25:32.358766  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bbf4db80884ca68121db5cd324106134f008d55ceebaa2d085ff9a78d6bc836"
	I0819 12:25:32.416790  506775 logs.go:123] Gathering logs for kube-scheduler [6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8] ...
	I0819 12:25:32.416828  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6baa26f8742a8686343596d2fa2f37869dfdf0f779921454b0f514549aad19c8"
	I0819 12:25:32.470687  506775 logs.go:123] Gathering logs for kube-controller-manager [db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8] ...
	I0819 12:25:32.470716  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db269d4aaac2f76ebe4d954bbc95bc91cd0db32b72b8cc6710bfea0916a990c8"
	I0819 12:25:32.532938  506775 logs.go:123] Gathering logs for kube-controller-manager [80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401] ...
	I0819 12:25:32.532992  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80421881617fc159def3bb11f59a8c750e63de334013cda4ebb677a69409f401"
	I0819 12:25:32.603991  506775 logs.go:123] Gathering logs for kindnet [49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a] ...
	I0819 12:25:32.604025  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49333be0a29a6e8ff2271ac54c2ce23bcf06d3d198767a64a31ac123da942f2a"
	I0819 12:25:32.676928  506775 logs.go:123] Gathering logs for kindnet [597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29] ...
	I0819 12:25:32.677003  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 597e793835a348134dae58cb28cbf8c7c5afb6b55ab77c57a898610b0036ab29"
	I0819 12:25:32.727944  506775 logs.go:123] Gathering logs for storage-provisioner [cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6] ...
	I0819 12:25:32.728027  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc0e89380ed7e7194ad6d7e8e1bb072d2ee34ed9e8621cc957f55e8a88be96d6"
	I0819 12:25:32.786838  506775 logs.go:123] Gathering logs for containerd ...
	I0819 12:25:32.786866  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0819 12:25:32.858654  506775 logs.go:123] Gathering logs for kube-apiserver [3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f] ...
	I0819 12:25:32.858698  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3adc6cce17e637725838624966046f6786a930b68fc69c98675834705549597f"
	I0819 12:25:32.922605  506775 logs.go:123] Gathering logs for describe nodes ...
	I0819 12:25:32.922634  506775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
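	Every command in this gathering pass runs on the node over SSH (ssh_runner.go): container logs via crictl logs --tail 400, unit logs via journalctl, and cluster state via the pinned kubectl under /var/lib/minikube/binaries/v1.31.0. The same collection can be reproduced by hand, sketched below with a hypothetical $PROFILE placeholder for the minikube profile name:
	
	    minikube ssh -p $PROFILE -- sudo crictl ps -a
	    minikube ssh -p $PROFILE -- sudo journalctl -u kubelet -n 400
	    minikube ssh -p $PROFILE -- sudo journalctl -u containerd -n 400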
	I0819 12:25:36.908448  501046 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0819 12:25:36.926940  501046 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0819 12:25:36.928948  501046 out.go:201] 
	W0819 12:25:36.930847  501046 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0819 12:25:36.930990  501046 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0819 12:25:36.931063  501046 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0819 12:25:36.931093  501046 out.go:270] * 
	W0819 12:25:36.932292  501046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0819 12:25:36.935172  501046 out.go:201] 
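	This is the actual test failure: the apiserver answers /healthz with 200, but the control plane never reported the expected v1.20.0 version within the 6m0s wait, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE. A sketch of the two checks involved, assuming the kubeconfig context carries the profile name as usual:
	
	    kubectl --context old-k8s-version-091610 get --raw /healthz    # should print "ok", matching the probe above
	    kubectl --context old-k8s-version-091610 version               # compare the reported server version against v1.20.0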
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	420c77d0fb766       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   d8b5958ad06b8       dashboard-metrics-scraper-8d5bb5db8-kgs2g
	e333f18f594f1       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   2d0190302d935       kubernetes-dashboard-cd95d586-vl2zf
	83f278d29d2bd       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   96ccaa35f7564       busybox
	d6ba97b27a6fc       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   7338604ef76f9       storage-provisioner
	8581310ffb6da       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   ba1411a04587c       kube-proxy-g2lvm
	312b3b2145bf1       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   734eb42796242       kindnet-d6xbs
	a9d449177d2f2       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   94eb725ea9714       coredns-74ff55c5b-fgk64
	4ce61d87754c1       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   96db743eac85a       kube-apiserver-old-k8s-version-091610
	309ceea1b6362       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   0f43af4b644d6       kube-scheduler-old-k8s-version-091610
	f2b278acf70fb       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   deb8a70d10938       etcd-old-k8s-version-091610
	ff22b0055b8eb       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   5e220bd076872       kube-controller-manager-old-k8s-version-091610
	5e11476d592f7       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   171f48ef9a6d0       busybox
	52772681d7f9b       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   b907b06638d3b       coredns-74ff55c5b-fgk64
	ff792901aeab0       6a23fa8fd2b78       7 minutes ago       Exited              kindnet-cni                 0                   c28f420138e71       kindnet-d6xbs
	7a63c07299e71       ba04bb24b9575       7 minutes ago       Exited              storage-provisioner         0                   c345e30727eb0       storage-provisioner
	495863fa41757       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   396055caac551       kube-proxy-g2lvm
	b8c6ba6c65d67       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   4898994ce5826       kube-controller-manager-old-k8s-version-091610
	f96bb26d0d9fd       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   e2e7db7b600d5       etcd-old-k8s-version-091610
	448906379c25a       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a19e8f2802ba4       kube-apiserver-old-k8s-version-091610
	1ef419f5f0679       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   5e5b4a81d7e74       kube-scheduler-old-k8s-version-091610
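	The table shows two generations of the cluster: each control-plane and addon container has a Running entry at ATTEMPT 1 from the restart under test plus an Exited ATTEMPT 0 from the first start, while dashboard-metrics-scraper sits Exited at ATTEMPT 5, matching the CrashLoopBackOff warnings. A sketch for narrowing to the crashing container on the node, assuming crictl resolves the truncated ID from the table as a prefix:
	
	    sudo crictl ps -a --name dashboard-metrics-scraper
	    sudo crictl logs --tail 50 420c77d0fb766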
	
	
	==> containerd <==
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.800843059Z" level=info msg="CreateContainer within sandbox \"d8b5958ad06b8adc98c9f4485843ac401340f2e9e66ddb5bbca46dcfa0204cfe\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f\""
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.801528541Z" level=info msg="StartContainer for \"64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f\""
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.868780285Z" level=info msg="StartContainer for \"64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f\" returns successfully"
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.997947326Z" level=info msg="shim disconnected" id=64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f namespace=k8s.io
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.998013860Z" level=warning msg="cleaning up after shim disconnected" id=64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f namespace=k8s.io
	Aug 19 12:21:47 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:47.998024486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 12:21:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:48.496785974Z" level=info msg="RemoveContainer for \"0960e66843dea7c2344d7d43d01bd46e97a8a4ac2601251b32174b7949908350\""
	Aug 19 12:21:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:21:48.513028468Z" level=info msg="RemoveContainer for \"0960e66843dea7c2344d7d43d01bd46e97a8a4ac2601251b32174b7949908350\" returns successfully"
	Aug 19 12:22:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:22:48.771684591Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:22:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:22:48.778435408Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 19 12:22:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:22:48.779969291Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 12:22:48 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:22:48.780010832Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.772405427Z" level=info msg="CreateContainer within sandbox \"d8b5958ad06b8adc98c9f4485843ac401340f2e9e66ddb5bbca46dcfa0204cfe\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.787605521Z" level=info msg="CreateContainer within sandbox \"d8b5958ad06b8adc98c9f4485843ac401340f2e9e66ddb5bbca46dcfa0204cfe\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e\""
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.788412675Z" level=info msg="StartContainer for \"420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e\""
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.865084681Z" level=info msg="StartContainer for \"420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e\" returns successfully"
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.892751464Z" level=info msg="shim disconnected" id=420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e namespace=k8s.io
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.892808210Z" level=warning msg="cleaning up after shim disconnected" id=420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e namespace=k8s.io
	Aug 19 12:23:21 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:21.892856242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 12:23:22 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:22.735211284Z" level=info msg="RemoveContainer for \"64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f\""
	Aug 19 12:23:22 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:23:22.740257574Z" level=info msg="RemoveContainer for \"64138d34d666e197736d55f1305d2f6fc006fee649f64ce55abf869c1f8cfb1f\" returns successfully"
	Aug 19 12:25:29 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:25:29.770476742Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:25:29 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:25:29.778504099Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 19 12:25:29 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:25:29.779997876Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 12:25:29 old-k8s-version-091610 containerd[573]: time="2024-08-19T12:25:29.780040821Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
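	containerd shows both failure loops directly: dashboard-metrics-scraper's shim exits immediately after each StartContainer (attempts 4 and 5 above), and every PullImage for fake.domain/registry.k8s.io/echoserver:1.4 dies at DNS resolution against the node's resolver 192.168.85.1:53. A sketch for confirming the DNS side from the node, assuming nslookup and containerd's ctr are available in the node image:
	
	    nslookup fake.domain 192.168.85.1                                           # expect NXDOMAIN / "no such host"
	    sudo ctr -n k8s.io images pull fake.domain/registry.k8s.io/echoserver:1.4   # reproduces the pull failure outside Kubernetes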
	
	
	==> coredns [52772681d7f9b14520c48d8c42d715a82c430226dad2db97bca20cde5180797d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54311 - 53760 "HINFO IN 6814207175258724651.1475505951579655250. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.041467732s
	
	
	==> coredns [a9d449177d2f20bb4e0279df0064dc677dffa2194c0bfd6deb8af6688e466210] <==
	I0819 12:20:21.094435       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 12:19:51.093675555 +0000 UTC m=+0.023792489) (total time: 30.000638094s):
	Trace[2019727887]: [30.000638094s] [30.000638094s] END
	E0819 12:20:21.094470       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0819 12:20:21.095149       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 12:19:51.094734042 +0000 UTC m=+0.024850976) (total time: 30.000381788s):
	Trace[939984059]: [30.000381788s] [30.000381788s] END
	E0819 12:20:21.095165       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0819 12:20:21.095817       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-19 12:19:51.095179693 +0000 UTC m=+0.025296619) (total time: 30.000615957s):
	Trace[1474941318]: [30.000615957s] [30.000615957s] END
	E0819 12:20:21.095839       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56219 - 41042 "HINFO IN 1549296721935357714.7162915720252293703. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011677738s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
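	The restarted coredns spent its first ~30s unable to reach the apiserver service at 10.96.0.1:443 (the three ListAndWatch i/o timeouts) while the control plane came back up, and the "plugin/ready: Still waiting" lines are its readiness plugin holding the pod NotReady until those caches sync; it then served queries normally. A sketch for watching the same recovery, assuming the stock k8s-app=kube-dns label on the coredns pods:
	
	    kubectl --context old-k8s-version-091610 -n kube-system get pods -l k8s-app=kube-dns -w
	    kubectl --context old-k8s-version-091610 -n kube-system logs -l k8s-app=kube-dns --tail=20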
	
	
	==> describe nodes <==
	Name:               old-k8s-version-091610
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-091610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7871dd89d2a8218fd3bbcc542b116f963c0d9934
	                    minikube.k8s.io/name=old-k8s-version-091610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T12_17_22_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 12:17:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-091610
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 12:25:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 12:20:49 +0000   Mon, 19 Aug 2024 12:17:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 12:20:49 +0000   Mon, 19 Aug 2024 12:17:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 12:20:49 +0000   Mon, 19 Aug 2024 12:17:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 12:20:49 +0000   Mon, 19 Aug 2024 12:17:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-091610
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 acb2a72da43f4832a91e6c484d720804
	  System UUID:                05f7a8a2-b8eb-4b55-a169-aa3e05dbaf49
	  Boot ID:                    e46e48f2-e1cc-40c1-bc17-f5e6b67a31cd
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-fgk64                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
	  kube-system                 etcd-old-k8s-version-091610                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m8s
	  kube-system                 kindnet-d6xbs                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
	  kube-system                 kube-apiserver-old-k8s-version-091610             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-controller-manager-old-k8s-version-091610    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-proxy-g2lvm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-old-k8s-version-091610             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 metrics-server-9975d5f86-zb7nt                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-kgs2g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-vl2zf               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m27s (x5 over 8m27s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x5 over 8m27s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x4 over 8m27s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m8s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-091610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m8s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m1s                   kubelet     Node old-k8s-version-091610 status is now: NodeReady
	  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-091610 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
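	The node description points away from a resource problem: the node is Ready, kubelet and kube-proxy each show two "Starting" generations (the original boot and the restart ~5m58s ago), and only 950m of the 2-CPU allocatable is requested. A sketch for pulling just the capacity view, assuming the same context name:
	
	    kubectl --context old-k8s-version-091610 describe node old-k8s-version-091610 | sed -n '/Allocated resources:/,/Events:/p'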
	
	
	==> dmesg <==
	[Aug19 11:00] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [f2b278acf70fb649b3b13d726b38e5b951b7950e84be6176979a50e06c284635] <==
	2024-08-19 12:21:34.903406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:21:44.903320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:21:54.903549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:04.903377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:14.903533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:24.903601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:34.903250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:44.903406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:22:54.903468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:04.903643 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:14.903608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:24.903548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:34.903473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:44.903572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:23:54.903715 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:04.903466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:14.903487 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:24.903409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:34.903416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:44.903395 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:24:54.903550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:25:04.903486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:25:14.903354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:25:24.903549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:25:34.903446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f96bb26d0d9fd54167ea4eedbda32851e7e6ed986c5d18edddbfb9d015c80aa8] <==
	raft2024/08/19 12:17:12 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/19 12:17:12 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/19 12:17:12 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/19 12:17:12 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/19 12:17:12 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-19 12:17:12.277977 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-19 12:17:12.285204 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-19 12:17:12.285367 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-19 12:17:12.285488 I | etcdserver: published {Name:old-k8s-version-091610 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-19 12:17:12.285682 I | embed: ready to serve client requests
	2024-08-19 12:17:12.285854 I | embed: ready to serve client requests
	2024-08-19 12:17:12.287444 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-19 12:17:12.327423 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-19 12:17:31.364699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:17:31.905937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:17:41.906055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:17:51.906370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:01.906640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:11.905994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:21.906024 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:31.905967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:41.906171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:18:51.906220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:19:01.906267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-19 12:19:11.906237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:25:38 up  2:08,  0 users,  load average: 0.64, 1.87, 2.61
	Linux old-k8s-version-091610 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [312b3b2145bf1c233ccc80613e0c67129fa905992c3e22c9a71929f05341b98f] <==
	I0819 12:24:22.032683       1 main.go:299] handling current node
	W0819 12:24:26.989818       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:24:26.989856       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 12:24:27.511506       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:24:27.511543       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:24:32.032935       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:24:32.032977       1 main.go:299] handling current node
	I0819 12:24:42.031903       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:24:42.031956       1 main.go:299] handling current node
	I0819 12:24:52.032230       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:24:52.032269       1 main.go:299] handling current node
	W0819 12:24:59.701861       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 12:24:59.701902       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 12:25:02.032445       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:25:02.032553       1 main.go:299] handling current node
	I0819 12:25:12.032051       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:25:12.032089       1 main.go:299] handling current node
	I0819 12:25:22.032251       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:25:22.032304       1 main.go:299] handling current node
	W0819 12:25:22.343258       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:25:22.343302       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 12:25:23.482864       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:25:23.482999       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:25:32.033185       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:25:32.033403       1 main.go:299] handling current node
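	Both kindnet containers keep handling the local node successfully but are periodically forbidden from listing Pods, Namespaces and NetworkPolicies as system:serviceaccount:kube-system:kindnet, which points at the ClusterRole/ClusterRoleBinding grants rather than connectivity. A sketch for checking the grants, assuming the ClusterRole is named kindnet as in the stock manifest:
	
	    kubectl --context old-k8s-version-091610 auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
	    kubectl --context old-k8s-version-091610 get clusterrole kindnet -o yaml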
	
	
	==> kindnet [ff792901aeab0bc28facb26abfa78879ae3a7f0e523d2f2d7a83d2138d80c10b] <==
	I0819 12:18:10.660407       1 main.go:299] handling current node
	W0819 12:18:13.762342       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 12:18:13.762392       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0819 12:18:20.193804       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:18:20.193923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 12:18:20.660625       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:18:20.660663       1 main.go:299] handling current node
	W0819 12:18:20.945220       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:18:20.945255       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:18:30.660212       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:18:30.660249       1 main.go:299] handling current node
	I0819 12:18:40.660223       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:18:40.660257       1 main.go:299] handling current node
	W0819 12:18:48.240992       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:18:48.241126       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 12:18:49.232446       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 12:18:49.232481       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 12:18:50.660162       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:18:50.660197       1 main.go:299] handling current node
	I0819 12:19:00.660882       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:19:00.660922       1 main.go:299] handling current node
	W0819 12:19:04.689396       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 12:19:04.689514       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 12:19:10.673682       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0819 12:19:10.673718       1 main.go:299] handling current node
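
	The repeated "forbidden" warnings in both kindnet logs mean the kindnet service account lacks cluster-scoped list/watch permissions on pods, namespaces, and networkpolicies. A minimal way to confirm which verbs are missing, assuming kubectl is pointed at this profile's context (the context name below is inferred from the node name old-k8s-version-091610 and may differ):
	
	    kubectl --context old-k8s-version-091610 auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
	    kubectl --context old-k8s-version-091610 auth can-i watch networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet
	
	If these print "no", the ClusterRole/ClusterRoleBinding that the kindnet manifest normally ships is missing or incomplete.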
	
	
	==> kube-apiserver [448906379c25acacfbe73890ef79d2faf13a76a8f18880099fda6187c53b0152] <==
	I0819 12:17:19.535034       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0819 12:17:19.567056       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0819 12:17:19.572194       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0819 12:17:19.572219       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0819 12:17:20.107744       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0819 12:17:20.188823       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0819 12:17:20.270045       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0819 12:17:20.271368       1 controller.go:606] quota admission added evaluator for: endpoints
	I0819 12:17:20.276139       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0819 12:17:21.142646       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0819 12:17:22.023906       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0819 12:17:22.100530       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0819 12:17:30.486196       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0819 12:17:37.077496       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0819 12:17:37.227267       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0819 12:17:48.751485       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:17:48.751541       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:17:48.751551       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 12:18:29.268594       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:18:29.268642       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:18:29.268652       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 12:19:10.893347       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:19:10.893401       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:19:10.893413       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0819 12:19:11.664339       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-apiserver [4ce61d87754c1a74c50315141a3956f04c053f6e9bf8ed92eb2f1d41f61bac61] <==
	I0819 12:22:06.345371       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:22:06.345430       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 12:22:42.064313       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:22:42.064363       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:22:42.064373       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 12:22:51.324995       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 12:22:51.325198       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 12:22:51.325214       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 12:23:16.759767       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:23:16.759971       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:23:16.759990       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 12:23:53.517199       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:23:53.517251       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:23:53.517260       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0819 12:24:28.189346       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:24:28.189390       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:24:28.189399       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0819 12:24:49.963134       1 handler_proxy.go:102] no RequestInfo found in the context
	E0819 12:24:49.963414       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0819 12:24:49.963433       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0819 12:25:08.145572       1 client.go:360] parsed scheme: "passthrough"
	I0819 12:25:08.145618       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0819 12:25:08.145626       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
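
	Both apiserver logs show the aggregated v1beta1.metrics.k8s.io API answering 503, consistent with the metrics-server pod never becoming ready (see the kubelet log further down). A quick status check, assuming the same context name as above and the standard k8s-app=metrics-server label:
	
	    kubectl --context old-k8s-version-091610 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context old-k8s-version-091610 -n kube-system get pods -l k8s-app=metrics-server
	
	An Available=False condition on the APIService (reason MissingEndpoints or FailedDiscoveryCheck) would confirm the 503s come from the backing service rather than the apiserver itself.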
	
	
	==> kube-controller-manager [b8c6ba6c65d67f62a14421151c3013537c37cfcf1bc0b08d90d27bda4241f566] <==
	I0819 12:17:37.252459       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
	I0819 12:17:37.252542       1 shared_informer.go:247] Caches are synced for cidrallocator 
	I0819 12:17:37.255187       1 shared_informer.go:247] Caches are synced for TTL 
	I0819 12:17:37.272378       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-g2lvm"
	I0819 12:17:37.273191       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d6xbs"
	I0819 12:17:37.282992       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0819 12:17:37.285968       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0819 12:17:37.286644       1 shared_informer.go:247] Caches are synced for expand 
	I0819 12:17:37.287906       1 shared_informer.go:247] Caches are synced for resource quota 
	I0819 12:17:37.305080       1 shared_informer.go:247] Caches are synced for attach detach 
	I0819 12:17:37.305369       1 shared_informer.go:247] Caches are synced for stateful set 
	I0819 12:17:37.306467       1 shared_informer.go:247] Caches are synced for resource quota 
	I0819 12:17:37.313931       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0819 12:17:37.327641       1 range_allocator.go:373] Set node old-k8s-version-091610 PodCIDR to [10.244.0.0/24]
	E0819 12:17:37.347730       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"671c5ca2-ca0f-407a-8674-8cc0596496a1", ResourceVersion:"407", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63859666642, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194d400), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194d420)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194d440), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194d460)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194d4a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400178df80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194d4c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194d4e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194d520)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001850060), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400184e358), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400043e930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000721118)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400184e3a8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0819 12:17:37.349928       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"d28356d0-1d83-4f8f-b200-55214561e19d", ResourceVersion:"282", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63859666642, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001dfa000), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001dfa020)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001dfa040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001dfa060), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001dfa080), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001dfa0a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001dfa0c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001dfa100)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d3b0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001dcad68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b8d650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400097d9a0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001dcadb0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0819 12:17:37.458213       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0819 12:17:37.751172       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 12:17:37.751195       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0819 12:17:37.758405       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0819 12:17:39.052362       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0819 12:17:39.109613       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-q4gqw"
	I0819 12:17:42.177907       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0819 12:19:11.157772       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0819 12:19:11.406708       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [ff22b0055b8eb8b604f52a3bf6c5df44fc6b0ff5546a72d4f2b20b85080af524] <==
	W0819 12:21:12.960064       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:21:38.091834       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:21:44.610582       1 request.go:655] Throttling request took 1.048530277s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0819 12:21:45.462825       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:22:08.594573       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:22:17.114185       1 request.go:655] Throttling request took 1.047900244s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:22:17.965664       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:22:39.097058       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:22:49.616064       1 request.go:655] Throttling request took 1.047950433s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:22:50.467635       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:23:09.599057       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:23:22.118158       1 request.go:655] Throttling request took 1.047269723s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
	W0819 12:23:22.969798       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:23:40.100988       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:23:54.620178       1 request.go:655] Throttling request took 1.048223201s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:23:55.471802       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:24:10.602928       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:24:27.122281       1 request.go:655] Throttling request took 1.048425409s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:24:27.973693       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:24:41.104818       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:24:59.624159       1 request.go:655] Throttling request took 1.047907369s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:25:00.476061       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0819 12:25:11.648367       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0819 12:25:32.126500       1 request.go:655] Throttling request took 1.048267474s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0819 12:25:32.977944       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
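
	The controller-manager's "failed to discover some groups" warnings and resource-quota errors are downstream of the same unavailable metrics API: each discovery pass includes the aggregated metrics.k8s.io group, fails, and client-side rate limiting then throttles the retries (the "Throttling request took ~1s" lines). The same discovery failure is typically visible from any client, e.g.:
	
	    kubectl --context old-k8s-version-091610 api-resources
	
	which should print "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" for as long as the APIService stays down.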
	
	
	==> kube-proxy [495863fa417577517a6659a9363e132d473fde25375c55ba292884732c5b5cc6] <==
	I0819 12:17:38.212797       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0819 12:17:38.212890       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0819 12:17:38.322891       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 12:17:38.322986       1 server_others.go:185] Using iptables Proxier.
	I0819 12:17:38.323189       1 server.go:650] Version: v1.20.0
	I0819 12:17:38.323687       1 config.go:315] Starting service config controller
	I0819 12:17:38.323718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 12:17:38.324470       1 config.go:224] Starting endpoint slice config controller
	I0819 12:17:38.324478       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 12:17:38.424803       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0819 12:17:38.424878       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [8581310ffb6da62d730c00416e9d418c1fd194d0459e551c98e90cd0193dc9be] <==
	I0819 12:19:51.836047       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0819 12:19:51.836339       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0819 12:19:51.855054       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0819 12:19:51.855367       1 server_others.go:185] Using iptables Proxier.
	I0819 12:19:51.855739       1 server.go:650] Version: v1.20.0
	I0819 12:19:51.856481       1 config.go:315] Starting service config controller
	I0819 12:19:51.857277       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0819 12:19:51.856588       1 config.go:224] Starting endpoint slice config controller
	I0819 12:19:51.859485       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0819 12:19:51.957466       1 shared_informer.go:247] Caches are synced for service config 
	I0819 12:19:51.959636       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [1ef419f5f067970c35ce55d03d0e7a36fdebdd452881d8f478b7e537af217a32] <==
	W0819 12:17:18.777066       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:17:18.777229       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:17:18.777384       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:17:18.817633       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 12:17:18.818239       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:17:18.818258       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:17:18.818273       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0819 12:17:18.822504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 12:17:18.823502       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 12:17:18.827513       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:17:18.827602       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 12:17:18.827708       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0819 12:17:18.827790       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 12:17:18.828465       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0819 12:17:18.828669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 12:17:18.834569       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0819 12:17:18.834945       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:17:18.835102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 12:17:18.835237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 12:17:19.673896       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 12:17:19.774727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 12:17:19.852046       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 12:17:19.908101       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 12:17:20.178807       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0819 12:17:21.918423       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [309ceea1b6362201d696baaee9f77608d461fce74f091280d398876fad125712] <==
	I0819 12:19:43.710453       1 serving.go:331] Generated self-signed cert in-memory
	W0819 12:19:48.684564       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0819 12:19:48.684611       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0819 12:19:48.684621       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0819 12:19:48.684627       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0819 12:19:49.015957       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0819 12:19:49.016286       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:19:49.016317       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0819 12:19:49.016336       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0819 12:19:49.117034       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 19 12:23:57 old-k8s-version-091610 kubelet[673]: E0819 12:23:57.769849     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:24:03 old-k8s-version-091610 kubelet[673]: E0819 12:24:03.769943     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: I0819 12:24:12.769253     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:24:12 old-k8s-version-091610 kubelet[673]: E0819 12:24:12.770074     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:24:18 old-k8s-version-091610 kubelet[673]: E0819 12:24:18.770049     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: I0819 12:24:27.769191     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:24:27 old-k8s-version-091610 kubelet[673]: E0819 12:24:27.769551     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:24:32 old-k8s-version-091610 kubelet[673]: E0819 12:24:32.769991     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: I0819 12:24:39.769285     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:24:39 old-k8s-version-091610 kubelet[673]: E0819 12:24:39.769647     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:24:47 old-k8s-version-091610 kubelet[673]: E0819 12:24:47.769929     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: I0819 12:24:52.769259     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:24:52 old-k8s-version-091610 kubelet[673]: E0819 12:24:52.769598     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:25:02 old-k8s-version-091610 kubelet[673]: E0819 12:25:02.770111     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: I0819 12:25:07.769308     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:25:07 old-k8s-version-091610 kubelet[673]: E0819 12:25:07.769696     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:25:16 old-k8s-version-091610 kubelet[673]: E0819 12:25:16.770077     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: I0819 12:25:21.769219     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:25:21 old-k8s-version-091610 kubelet[673]: E0819 12:25:21.769596     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	Aug 19 12:25:29 old-k8s-version-091610 kubelet[673]: E0819 12:25:29.780406     673 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 12:25:29 old-k8s-version-091610 kubelet[673]: E0819 12:25:29.780466     673 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 12:25:29 old-k8s-version-091610 kubelet[673]: E0819 12:25:29.780607     673 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-hgch9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Aug 19 12:25:29 old-k8s-version-091610 kubelet[673]: E0819 12:25:29.780645     673 pod_workers.go:191] Error syncing pod 4085e2df-7e89-44a8-b234-c4b001bdff1d ("metrics-server-9975d5f86-zb7nt_kube-system(4085e2df-7e89-44a8-b234-c4b001bdff1d)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 19 12:25:32 old-k8s-version-091610 kubelet[673]: I0819 12:25:32.769206     673 scope.go:95] [topologymanager] RemoveContainer - Container ID: 420c77d0fb766d92374f469eb83159e0d4649bc9d95c8a015b2d41133f73322e
	Aug 19 12:25:32 old-k8s-version-091610 kubelet[673]: E0819 12:25:32.769532     673 pod_workers.go:191] Error syncing pod 67d879d9-c2b3-4d91-8855-4f6007f01c6e ("dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kgs2g_kubernetes-dashboard(67d879d9-c2b3-4d91-8855-4f6007f01c6e)"
	
	
	==> kubernetes-dashboard [e333f18f594f1e8bfc8059d2d10fca8e2977c7d6931e0738413013bc0a844e60] <==
	2024/08/19 12:20:19 Using namespace: kubernetes-dashboard
	2024/08/19 12:20:19 Using in-cluster config to connect to apiserver
	2024/08/19 12:20:19 Using secret token for csrf signing
	2024/08/19 12:20:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/19 12:20:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/19 12:20:19 Successful initial request to the apiserver, version: v1.20.0
	2024/08/19 12:20:19 Generating JWE encryption key
	2024/08/19 12:20:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/19 12:20:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/19 12:20:19 Initializing JWE encryption key from synchronized object
	2024/08/19 12:20:19 Creating in-cluster Sidecar client
	2024/08/19 12:20:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:20:19 Serving insecurely on HTTP port: 9090
	2024/08/19 12:20:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:21:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:21:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:22:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:22:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:23:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:23:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:24:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:24:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:25:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/19 12:20:19 Starting overwatch
	
	
	==> storage-provisioner [7a63c07299e71c1920b148c4c2cd68ce0fc64d5359eb08ebc374e073275d266b] <==
	I0819 12:17:39.508026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 12:17:39.527426       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 12:17:39.527517       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 12:17:39.550392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 12:17:39.553242       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-091610_2ef61507-a5ef-490b-b3e9-cb07ebaa8f95!
	I0819 12:17:39.567670       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"890f460c-7aff-453c-b508-cfc0d53bd4ea", APIVersion:"v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-091610_2ef61507-a5ef-490b-b3e9-cb07ebaa8f95 became leader
	I0819 12:17:39.654696       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-091610_2ef61507-a5ef-490b-b3e9-cb07ebaa8f95!
	
	
	==> storage-provisioner [d6ba97b27a6fcb89c2d05f135c98e06be837786617de7736914f6711ab33c929] <==
	I0819 12:19:52.698564       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 12:19:52.716687       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 12:19:52.716750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 12:20:10.208483       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 12:20:10.216179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"890f460c-7aff-453c-b508-cfc0d53bd4ea", APIVersion:"v1", ResourceVersion:"778", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-091610_18d028df-9f61-49cc-b88c-57877d4b4a2a became leader
	I0819 12:20:10.216529       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-091610_18d028df-9f61-49cc-b88c-57877d4b4a2a!
	I0819 12:20:10.317360       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-091610_18d028df-9f61-49cc-b88c-57877d4b4a2a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610 -n old-k8s-version-091610
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-091610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-zb7nt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-091610 describe pod metrics-server-9975d5f86-zb7nt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-091610 describe pod metrics-server-9975d5f86-zb7nt: exit status 1 (144.641866ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-zb7nt" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-091610 describe pod metrics-server-9975d5f86-zb7nt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.74s)
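
The failure above traces back to the metrics-server pod visible in the kubelet log: its image is pinned to the unresolvable registry fake.domain, so every pull dies at DNS lookup and the pod never leaves ImagePullBackOff, and the NotFound on the final describe suggests the pod was replaced between the listing and the post-mortem. A minimal sketch for inspecting the same state by hand, assuming the profile is still up and that the addon labels its pods k8s-app=metrics-server (both assumptions, not taken from this report):

	kubectl --context old-k8s-version-091610 -n kube-system get pods -l k8s-app=metrics-server
	# DNS check from inside the node; assumes nslookup is present in the kicbase image
	minikube -p old-k8s-version-091610 ssh -- nslookup fake.domain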
Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.34
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.0/json-events 4.91
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 217.52
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 14.44
34 TestAddons/parallel/Ingress 18.74
35 TestAddons/parallel/InspektorGadget 11.99
36 TestAddons/parallel/MetricsServer 7.03
39 TestAddons/parallel/CSI 36.24
40 TestAddons/parallel/Headlamp 17
41 TestAddons/parallel/CloudSpanner 6.93
42 TestAddons/parallel/LocalPath 53.13
43 TestAddons/parallel/NvidiaDevicePlugin 5.56
44 TestAddons/parallel/Yakd 12.03
45 TestAddons/StoppedEnableDisable 12.22
46 TestCertOptions 39.51
47 TestCertExpiration 232.05
49 TestForceSystemdFlag 40.14
50 TestForceSystemdEnv 39.49
51 TestDockerEnvContainerd 43.66
56 TestErrorSpam/setup 34.75
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.13
59 TestErrorSpam/pause 1.89
60 TestErrorSpam/unpause 1.82
61 TestErrorSpam/stop 1.49
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.82
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.01
73 TestFunctional/serial/CacheCmd/cache/add_local 1.29
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 45.45
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.79
84 TestFunctional/serial/LogsFileCmd 1.79
85 TestFunctional/serial/InvalidService 4.26
87 TestFunctional/parallel/ConfigCmd 0.53
88 TestFunctional/parallel/DashboardCmd 11.03
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.2
95 TestFunctional/parallel/ServiceCmdConnect 10.74
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 26.08
99 TestFunctional/parallel/SSHCmd 0.72
100 TestFunctional/parallel/CpCmd 2.39
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 2.7
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
111 TestFunctional/parallel/License 0.24
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.48
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
125 TestFunctional/parallel/ServiceCmd/List 0.61
126 TestFunctional/parallel/ProfileCmd/profile_list 0.48
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
130 TestFunctional/parallel/MountCmd/any-port 7.56
131 TestFunctional/parallel/ServiceCmd/Format 0.44
132 TestFunctional/parallel/ServiceCmd/URL 0.65
133 TestFunctional/parallel/MountCmd/specific-port 2.15
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.49
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.32
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.19
142 TestFunctional/parallel/ImageCommands/Setup 0.64
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.63
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.42
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 115.96
160 TestMultiControlPlane/serial/DeployApp 28.48
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 24.96
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
165 TestMultiControlPlane/serial/CopyFile 19.14
166 TestMultiControlPlane/serial/StopSecondaryNode 12.81
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 28.59
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 132.66
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.17
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
173 TestMultiControlPlane/serial/StopCluster 36.31
174 TestMultiControlPlane/serial/RestartCluster 76.89
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
176 TestMultiControlPlane/serial/AddSecondaryNode 39.34
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
181 TestJSONOutput/start/Command 51.32
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.77
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.72
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 35.35
207 TestKicCustomNetwork/use_default_bridge_network 33.55
208 TestKicExistingNetwork 32.95
209 TestKicCustomSubnet 36.23
210 TestKicStaticIP 35.16
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 68.59
215 TestMountStart/serial/StartWithMountFirst 6.61
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.47
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.65
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.28
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 69.72
227 TestMultiNode/serial/DeployApp2Nodes 16.77
228 TestMultiNode/serial/PingHostFrom2Pods 1.01
229 TestMultiNode/serial/AddNode 20
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.39
232 TestMultiNode/serial/CopyFile 10.07
233 TestMultiNode/serial/StopNode 2.34
234 TestMultiNode/serial/StartAfterStop 10.05
235 TestMultiNode/serial/RestartKeepsNodes 94.74
236 TestMultiNode/serial/DeleteNode 5.5
237 TestMultiNode/serial/StopMultiNode 24.2
238 TestMultiNode/serial/RestartMultiNode 49.71
239 TestMultiNode/serial/ValidateNameConflict 32.8
244 TestPreload 114.03
246 TestScheduledStopUnix 108.56
249 TestInsufficientStorage 13.29
250 TestRunningBinaryUpgrade 100.7
252 TestKubernetesUpgrade 107.18
253 TestMissingContainerUpgrade 177.33
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 42.01
257 TestNoKubernetes/serial/StartWithStopK8s 17.85
258 TestNoKubernetes/serial/Start 8.48
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
260 TestNoKubernetes/serial/ProfileList 1.1
261 TestNoKubernetes/serial/Stop 1.27
262 TestNoKubernetes/serial/StartNoArgs 7.7
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
264 TestStoppedBinaryUpgrade/Setup 0.84
265 TestStoppedBinaryUpgrade/Upgrade 121.66
274 TestPause/serial/Start 61.4
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.99
283 TestNetworkPlugins/group/false 4.25
284 TestPause/serial/SecondStartNoReconfiguration 6.95
288 TestPause/serial/Pause 1.09
289 TestPause/serial/VerifyStatus 0.41
290 TestPause/serial/Unpause 0.88
291 TestPause/serial/PauseAgain 1.2
292 TestPause/serial/DeletePaused 3.5
293 TestPause/serial/VerifyDeletedResources 0.38
295 TestStartStop/group/old-k8s-version/serial/FirstStart 146.21
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.72
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.95
298 TestStartStop/group/old-k8s-version/serial/Stop 12.65
300 TestStartStop/group/no-preload/serial/FirstStart 95.66
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.49
303 TestStartStop/group/no-preload/serial/DeployApp 8.49
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
305 TestStartStop/group/no-preload/serial/Stop 12.1
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 266.84
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.37
313 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
314 TestStartStop/group/old-k8s-version/serial/Pause 3.99
315 TestStartStop/group/no-preload/serial/Pause 4.37
317 TestStartStop/group/embed-certs/serial/FirstStart 64.38
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.26
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
321 TestStartStop/group/embed-certs/serial/DeployApp 9.5
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.26
324 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
325 TestStartStop/group/embed-certs/serial/Stop 12.03
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 292.79
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/embed-certs/serial/SecondStart 272.26
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/Pause 3.17
335 TestStartStop/group/newest-cni/serial/FirstStart 39.04
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.06
340 TestNetworkPlugins/group/auto/Start 66.36
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.82
343 TestStartStop/group/newest-cni/serial/Stop 1.39
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
345 TestStartStop/group/newest-cni/serial/SecondStart 21.85
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
349 TestStartStop/group/newest-cni/serial/Pause 4.12
350 TestNetworkPlugins/group/kindnet/Start 52.55
351 TestNetworkPlugins/group/auto/KubeletFlags 0.44
352 TestNetworkPlugins/group/auto/NetCatPod 12.37
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.15
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/calico/Start 70.68
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
360 TestNetworkPlugins/group/kindnet/DNS 0.26
361 TestNetworkPlugins/group/kindnet/Localhost 0.2
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/custom-flannel/Start 58.62
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.36
366 TestNetworkPlugins/group/calico/NetCatPod 11.34
367 TestNetworkPlugins/group/calico/DNS 0.19
368 TestNetworkPlugins/group/calico/Localhost 0.25
369 TestNetworkPlugins/group/calico/HairPin 0.22
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
372 TestNetworkPlugins/group/enable-default-cni/Start 50.82
373 TestNetworkPlugins/group/custom-flannel/DNS 0.23
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
376 TestNetworkPlugins/group/flannel/Start 54.67
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/bridge/Start 50.63
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
385 TestNetworkPlugins/group/flannel/NetCatPod 11.57
386 TestNetworkPlugins/group/flannel/DNS 0.27
387 TestNetworkPlugins/group/flannel/Localhost 0.17
388 TestNetworkPlugins/group/flannel/HairPin 0.22
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 9.25
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.20.0/json-events (7.34s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-475037 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-475037 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.337849238s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.34s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-475037
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-475037: exit status 85 (71.255639ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-475037 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |          |
	|         | -p download-only-475037        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:31:15
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:31:15.227343  299196 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:15.227501  299196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.227512  299196 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:15.227518  299196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:15.227790  299196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	W0819 11:31:15.227937  299196 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19476-293809/.minikube/config/config.json: open /home/jenkins/minikube-integration/19476-293809/.minikube/config/config.json: no such file or directory
	I0819 11:31:15.228353  299196 out.go:352] Setting JSON to true
	I0819 11:31:15.229269  299196 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4423,"bootTime":1724062653,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 11:31:15.229346  299196 start.go:139] virtualization:  
	I0819 11:31:15.232367  299196 out.go:97] [download-only-475037] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0819 11:31:15.232499  299196 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 11:31:15.232545  299196 notify.go:220] Checking for updates...
	I0819 11:31:15.234201  299196 out.go:169] MINIKUBE_LOCATION=19476
	I0819 11:31:15.236309  299196 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:15.238259  299196 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:31:15.239955  299196 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 11:31:15.241868  299196 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 11:31:15.245262  299196 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:31:15.245691  299196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:31:15.273247  299196 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:31:15.273363  299196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:15.335481  299196 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 11:31:15.325656935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:15.335611  299196 docker.go:307] overlay module found
	I0819 11:31:15.337350  299196 out.go:97] Using the docker driver based on user configuration
	I0819 11:31:15.337384  299196 start.go:297] selected driver: docker
	I0819 11:31:15.337392  299196 start.go:901] validating driver "docker" against <nil>
	I0819 11:31:15.337516  299196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:15.395392  299196 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 11:31:15.385612292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:15.395563  299196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:15.395875  299196 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 11:31:15.396065  299196 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:31:15.398143  299196 out.go:169] Using Docker driver with root privileges
	I0819 11:31:15.399724  299196 cni.go:84] Creating CNI manager for ""
	I0819 11:31:15.399753  299196 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 11:31:15.399764  299196 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 11:31:15.399847  299196 start.go:340] cluster config:
	{Name:download-only-475037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-475037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:31:15.401694  299196 out.go:97] Starting "download-only-475037" primary control-plane node in "download-only-475037" cluster
	I0819 11:31:15.401714  299196 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 11:31:15.403394  299196 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0819 11:31:15.403435  299196 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 11:31:15.403601  299196 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0819 11:31:15.421635  299196 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:31:15.421835  299196 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0819 11:31:15.421933  299196 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0819 11:31:15.456807  299196 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 11:31:15.456845  299196 cache.go:56] Caching tarball of preloaded images
	I0819 11:31:15.457015  299196 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 11:31:15.459086  299196 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 11:31:15.459116  299196 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 11:31:15.542757  299196 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 11:31:18.527018  299196 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0819 11:31:19.249054  299196 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 11:31:19.249155  299196 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19476-293809/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-475037 host does not exist
	  To start a cluster, run: "minikube start -p download-only-475037"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
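
The exit status 85 above is the expected shape of this subtest rather than a defect: a --download-only profile caches the kicbase image and the preload tarball but never creates a host, so "minikube logs" has nothing to read and the check appears to be on duration, not on success. A minimal sketch reproducing the same behavior, assuming minikube is on PATH (the profile name demo-dl is made up):

	minikube start -p demo-dl --download-only --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	minikube logs -p demo-dl || echo "exit $?: expected, the host was never created"
	minikube delete -p demo-dl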

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-475037
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0/json-events (4.91s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-985567 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-985567 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.914176969s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (4.91s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-985567
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-985567: exit status 85 (65.343473ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-475037 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | -p download-only-475037        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| delete  | -p download-only-475037        | download-only-475037 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC | 19 Aug 24 11:31 UTC |
	| start   | -o=json --download-only        | download-only-985567 | jenkins | v1.33.1 | 19 Aug 24 11:31 UTC |                     |
	|         | -p download-only-985567        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 11:31:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 11:31:22.979794  299401 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:31:22.979926  299401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:22.979937  299401 out.go:358] Setting ErrFile to fd 2...
	I0819 11:31:22.979943  299401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:31:22.980183  299401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:31:22.980567  299401 out.go:352] Setting JSON to true
	I0819 11:31:22.981400  299401 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4430,"bootTime":1724062653,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 11:31:22.981469  299401 start.go:139] virtualization:  
	I0819 11:31:22.983456  299401 out.go:97] [download-only-985567] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 11:31:22.983704  299401 notify.go:220] Checking for updates...
	I0819 11:31:22.985218  299401 out.go:169] MINIKUBE_LOCATION=19476
	I0819 11:31:22.986969  299401 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:31:22.988722  299401 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:31:22.990247  299401 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 11:31:22.991982  299401 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 11:31:22.995119  299401 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 11:31:22.995369  299401 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:31:23.020980  299401 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:31:23.021096  299401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:23.091643  299401 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:31:23.08148041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:23.091769  299401 docker.go:307] overlay module found
	I0819 11:31:23.093542  299401 out.go:97] Using the docker driver based on user configuration
	I0819 11:31:23.093573  299401 start.go:297] selected driver: docker
	I0819 11:31:23.093581  299401 start.go:901] validating driver "docker" against <nil>
	I0819 11:31:23.093691  299401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:31:23.157725  299401 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-19 11:31:23.148911774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:31:23.157891  299401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 11:31:23.158170  299401 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 11:31:23.158341  299401 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 11:31:23.160267  299401 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-985567 host does not exist
	  To start a cluster, run: "minikube start -p download-only-985567"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-985567
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-274914 --alsologtostderr --binary-mirror http://127.0.0.1:39509 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-274914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-274914
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-288312
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-288312: exit status 85 (65.545727ms)

-- stdout --
	* Profile "addons-288312" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-288312"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-288312
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-288312: exit status 85 (79.739898ms)

-- stdout --
	* Profile "addons-288312" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-288312"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (217.52s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-288312 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-288312 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.522086587s)
--- PASS: TestAddons/Setup (217.52s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-288312 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-288312 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (14.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.779468ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-2w4jh" [21555be8-e2ba-4037-9a5a-a4120f29c7b9] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004081157s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wqcnn" [ca7f7a74-631a-42a9-91cd-00a451340d6b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003610548s
addons_test.go:342: (dbg) Run:  kubectl --context addons-288312 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-288312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-288312 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.424996287s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 ip
2024/08/19 11:39:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.44s)
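Note: the registry probe above is easy to replay by hand. A sketch using the same commands the test ran (the profile name addons-288312 is specific to this run, and the registry addon must still be enabled):

  # probe the in-cluster registry service from a throwaway busybox pod
  kubectl --context addons-288312 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # node IP that fronts the registry-proxy
  minikube -p addons-288312 ip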

TestAddons/parallel/Ingress (18.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-288312 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-288312 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-288312 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2efe5504-33eb-4e72-bd48-2ed7697c0506] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2efe5504-33eb-4e72-bd48-2ed7697c0506] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004370266s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-288312 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable ingress-dns --alsologtostderr -v=1: (1.047772621s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable ingress --alsologtostderr -v=1: (7.83753746s)
--- PASS: TestAddons/parallel/Ingress (18.74s)
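Note: both ingress probes above reduce to two commands; a sketch, where 192.168.49.2 is the node IP reported earlier in this run:

  # HTTP through ingress-nginx, routed by the Host header
  minikube -p addons-288312 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # name resolution through the ingress-dns addon
  nslookup hello-john.test 192.168.49.2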

TestAddons/parallel/InspektorGadget (11.99s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7j5j6" [d8a09e01-a318-4543-8ef9-09bd8e0603da] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004863261s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-288312
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-288312: (5.982844962s)
--- PASS: TestAddons/parallel/InspektorGadget (11.99s)

TestAddons/parallel/MetricsServer (7.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.526922ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-k867h" [a9885e81-d5f8-4422-84fd-94def24ba0cf] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.03271401s
addons_test.go:417: (dbg) Run:  kubectl --context addons-288312 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.03s)

TestAddons/parallel/CSI (36.24s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.742639ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-288312 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-288312 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a20571ac-9d59-446f-a0ad-3c744112dd78] Pending
helpers_test.go:344: "task-pv-pod" [a20571ac-9d59-446f-a0ad-3c744112dd78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a20571ac-9d59-446f-a0ad-3c744112dd78] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004505112s
addons_test.go:590: (dbg) Run:  kubectl --context addons-288312 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-288312 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-288312 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-288312 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-288312 delete pod task-pv-pod: (1.294409433s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-288312 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-288312 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-288312 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [55138ee5-31b1-4a74-a893-35399e186806] Pending
helpers_test.go:344: "task-pv-pod-restore" [55138ee5-31b1-4a74-a893-35399e186806] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [55138ee5-31b1-4a74-a893-35399e186806] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003431689s
addons_test.go:632: (dbg) Run:  kubectl --context addons-288312 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-288312 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-288312 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.786688754s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable volumesnapshots --alsologtostderr -v=1: (1.059046411s)
--- PASS: TestAddons/parallel/CSI (36.24s)
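Note: the CSI sequence above is a create/snapshot/restore round-trip. Condensed sketch (the test passes --context addons-288312 to every kubectl call, elided here; the manifest paths live in the test's testdata directory):

  kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # claim "hpvc"
  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod that binds it
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new claim from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod that verifies the restore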

TestAddons/parallel/Headlamp (17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-288312 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-288312 --alsologtostderr -v=1: (1.189941039s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9tzhb" [cc2fa959-86be-4975-ada7-c19a8d289c5a] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-9tzhb" [cc2fa959-86be-4975-ada7-c19a8d289c5a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9tzhb" [cc2fa959-86be-4975-ada7-c19a8d289c5a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003886456s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable headlamp --alsologtostderr -v=1: (5.805493131s)
--- PASS: TestAddons/parallel/Headlamp (17.00s)

TestAddons/parallel/CloudSpanner (6.93s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-p24hn" [7ceb1fae-8823-4b37-8ce7-d1d33a0d1e7b] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.011250836s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-288312
--- PASS: TestAddons/parallel/CloudSpanner (6.93s)

TestAddons/parallel/LocalPath (53.13s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-288312 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-288312 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7acf0bb9-39a8-42df-94e2-340405ddbb59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7acf0bb9-39a8-42df-94e2-340405ddbb59] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7acf0bb9-39a8-42df-94e2-340405ddbb59] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003621119s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-288312 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 ssh "cat /opt/local-path-provisioner/pvc-e4310871-41fc-4b3c-adab-30acd012f2a9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-288312 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-288312 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.866122852s)
--- PASS: TestAddons/parallel/LocalPath (53.13s)
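Note: the local-path check boils down to binding a PVC, letting a pod write file1, then reading it back from the provisioner's directory on the node. A sketch (the pvc-<uid> directory name is unique per run):

  kubectl --context addons-288312 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-288312 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # once the pod has completed, the written data is visible on the node:
  minikube -p addons-288312 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"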

TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-892v8" [e61326b6-6a52-44ff-be2e-4479f137b093] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004260197s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-288312
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

TestAddons/parallel/Yakd (12.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-snwj8" [6891c7b4-09af-46ee-bbeb-6da37fd664e7] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003994505s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-288312 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-288312 addons disable yakd --alsologtostderr -v=1: (6.024126507s)
--- PASS: TestAddons/parallel/Yakd (12.03s)

TestAddons/StoppedEnableDisable (12.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-288312
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-288312: (11.956730237s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-288312
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-288312
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-288312
--- PASS: TestAddons/StoppedEnableDisable (12.22s)
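Note: what this test asserts, in plain commands: addon enable/disable must keep working against a stopped profile. All four commands below exited zero in this run (sketch):

  minikube stop -p addons-288312
  minikube addons enable dashboard -p addons-288312
  minikube addons disable dashboard -p addons-288312
  minikube addons disable gvisor -p addons-288312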

TestCertOptions (39.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-058229 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-058229 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.825486574s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-058229 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-058229 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-058229 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-058229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-058229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-058229: (2.005940604s)
--- PASS: TestCertOptions (39.51s)
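Note: to eyeball the same assertions manually, check that the apiserver certificate carries the extra SANs and that the kubeconfig points at the non-default port. A sketch (the greps are ours, not the test's):

  minikube -p cert-options-058229 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'    # expect 192.168.15.15 and www.google.com
  kubectl --context cert-options-058229 config view | grep server   # expect ...:8555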

TestCertExpiration (232.05s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-553371 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-553371 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.453718694s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-553371 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-553371 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.320135727s)
helpers_test.go:175: Cleaning up "cert-expiration-553371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-553371
E0819 12:19:15.859063  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-553371: (2.276054271s)
--- PASS: TestCertExpiration (232.05s)
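Note: the scenario, condensed: start with certificates that expire in minutes, wait them out, then restart with a long expiry so minikube has to regenerate them. A sketch:

  minikube start -p cert-expiration-553371 --memory=2048 --cert-expiration=3m \
    --driver=docker --container-runtime=containerd
  sleep 180    # let the 3m certificates lapse
  minikube start -p cert-expiration-553371 --memory=2048 --cert-expiration=8760h \
    --driver=docker --container-runtime=containerd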

TestForceSystemdFlag (40.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-037321 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-037321 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.619366205s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-037321 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-037321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-037321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-037321: (2.072782173s)
--- PASS: TestForceSystemdFlag (40.14s)
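Note: the config.toml cat above backs this assertion: with --force-systemd, containerd inside the node runs with the systemd cgroup driver. Sketch of checking it by hand (the grep is ours, not the test's exact check):

  minikube start -p force-systemd-flag-037321 --memory=2048 --force-systemd \
    --driver=docker --container-runtime=containerd
  minikube -p force-systemd-flag-037321 ssh "cat /etc/containerd/config.toml" \
    | grep SystemdCgroup    # expect: SystemdCgroup = true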

TestForceSystemdEnv (39.49s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-759874 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-759874 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.646527892s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-759874 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-759874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-759874
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-759874: (2.510585776s)
--- PASS: TestForceSystemdEnv (39.49s)

TestDockerEnvContainerd (43.66s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-453149 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-453149 --driver=docker  --container-runtime=containerd: (28.160403148s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-453149"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vim0OTiveUPf/agent.318382" SSH_AGENT_PID="318383" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vim0OTiveUPf/agent.318382" SSH_AGENT_PID="318383" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vim0OTiveUPf/agent.318382" SSH_AGENT_PID="318383" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.084736685s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vim0OTiveUPf/agent.318382" SSH_AGENT_PID="318383" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vim0OTiveUPf/agent.318382" SSH_AGENT_PID="318383" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls": (1.010134592s)
helpers_test.go:175: Cleaning up "dockerenv-453149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-453149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-453149: (1.942774952s)
--- PASS: TestDockerEnvContainerd (43.66s)
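Note: the docker-env flow above is replayable by hand. The eval exports DOCKER_HOST (an ssh:// URL) plus the SSH agent variables into the current shell, after which plain docker commands land on the minikube node. A sketch (profile name from this run):

  eval "$(minikube -p dockerenv-453149 docker-env --ssh-host --ssh-add)"
  docker version
  docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls    # the freshly built image should appear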

TestErrorSpam/setup (34.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-750435 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-750435 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-750435 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-750435 --driver=docker  --container-runtime=containerd: (34.754356106s)
--- PASS: TestErrorSpam/setup (34.75s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (1.82s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 stop: (1.300026308s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-750435 --log_dir /tmp/nospam-750435 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19476-293809/.minikube/files/etc/test/nested/copy/299191/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-970286 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.810269916s)
--- PASS: TestFunctional/serial/StartWithProxy (52.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.82s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-970286 --alsologtostderr -v=8: (6.814881592s)
functional_test.go:663: soft start took 6.818360839s for "functional-970286" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.82s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-970286 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:3.1: (1.479216369s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:3.3: (1.359383814s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 cache add registry.k8s.io/pause:latest: (1.172157221s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-970286 /tmp/TestFunctionalserialCacheCmdcacheadd_local387270861/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache add minikube-local-cache-test:functional-970286
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache delete minikube-local-cache-test:functional-970286
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-970286
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.386144ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 cache reload: (1.115771062s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
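Note: the reload cycle above as plain commands: remove a cached image from the node, confirm it is gone, push the cache back, confirm it returned:

  minikube -p functional-970286 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-970286 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
  minikube -p functional-970286 cache reload
  minikube -p functional-970286 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again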

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 kubectl -- --context functional-970286 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-970286 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-970286 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.451488641s)
functional_test.go:761: restart took 45.451598833s for "functional-970286" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.45s)
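Note: --extra-config takes component.flag=value pairs and forwards them to the named control-plane component; the restart above amounts to:

  minikube start -p functional-970286 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all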

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-970286 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 logs: (1.792709502s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 logs --file /tmp/TestFunctionalserialLogsFileCmd4145768134/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 logs --file /tmp/TestFunctionalserialLogsFileCmd4145768134/001/logs.txt: (1.7917405s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.26s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-970286 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-970286
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-970286: exit status 115 (426.563799ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31265 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-970286 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.26s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 config get cpus: exit status 14 (86.155804ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 config get cpus: exit status 14 (101.089126ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)

TestFunctional/parallel/DashboardCmd (11.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970286 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-970286 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 333134: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.03s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970286 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (179.217356ms)

-- stdout --
	* [functional-970286] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0819 11:44:46.950028  332826 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:44:46.950220  332826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:46.950250  332826 out.go:358] Setting ErrFile to fd 2...
	I0819 11:44:46.950272  332826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:46.950506  332826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:44:46.951035  332826 out.go:352] Setting JSON to false
	I0819 11:44:46.952136  332826 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5234,"bootTime":1724062653,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 11:44:46.952234  332826 start.go:139] virtualization:  
	I0819 11:44:46.954591  332826 out.go:177] * [functional-970286] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 11:44:46.957385  332826 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:44:46.957536  332826 notify.go:220] Checking for updates...
	I0819 11:44:46.961099  332826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:44:46.962981  332826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:44:46.964789  332826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 11:44:46.966499  332826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 11:44:46.968597  332826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:44:46.971327  332826 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:44:46.971831  332826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:44:46.999813  332826 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:44:46.999948  332826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:44:47.061131  332826 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 11:44:47.051706679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:44:47.061258  332826 docker.go:307] overlay module found
	I0819 11:44:47.064170  332826 out.go:177] * Using the docker driver based on existing profile
	I0819 11:44:47.066078  332826 start.go:297] selected driver: docker
	I0819 11:44:47.066107  332826 start.go:901] validating driver "docker" against &{Name:functional-970286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-970286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:44:47.066260  332826 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:44:47.069092  332826 out.go:201] 
	W0819 11:44:47.071132  332826 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 11:44:47.073198  332826 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
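
Note: the first dry run fails fast (exit status 23, reason RSRC_INSUFFICIENT_REQ_MEMORY) because the requested 250MB is below minikube's usable minimum of 1800MB, while the follow-up dry run without the low --memory value succeeds. A sketch of that validation shape; the constant and function names here are illustrative, not minikube's internal API:

    // memfloor_sketch.go illustrates the memory floor enforced above. The
    // 1800MB minimum is quoted from the RSRC_INSUFFICIENT_REQ_MEMORY message;
    // the identifiers below are hypothetical, not minikube's actual code.
    package main

    import "fmt"

    const minUsableMemoryMB = 1800 // from the error text above

    func validateRequestedMemory(requestedMB int) error {
        if requestedMB < minUsableMemoryMB {
            return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
                requestedMB, minUsableMemoryMB)
        }
        return nil
    }

    func main() {
        fmt.Println(validateRequestedMemory(250))  // rejected, as in the dry run
        fmt.Println(validateRequestedMemory(4000)) // the profile's existing 4000MB is fine
    }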

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-970286 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-970286 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.564373ms)

-- stdout --
	* [functional-970286] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0819 11:44:46.767810  332779 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:44:46.767997  332779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:46.768009  332779 out.go:358] Setting ErrFile to fd 2...
	I0819 11:44:46.768014  332779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:44:46.768405  332779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:44:46.768815  332779 out.go:352] Setting JSON to false
	I0819 11:44:46.769839  332779 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":5234,"bootTime":1724062653,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 11:44:46.769915  332779 start.go:139] virtualization:  
	I0819 11:44:46.773401  332779 out.go:177] * [functional-970286] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 11:44:46.776651  332779 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 11:44:46.776843  332779 notify.go:220] Checking for updates...
	I0819 11:44:46.781512  332779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 11:44:46.783595  332779 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 11:44:46.785935  332779 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 11:44:46.787970  332779 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 11:44:46.790134  332779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 11:44:46.792676  332779 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:44:46.793267  332779 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 11:44:46.816281  332779 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 11:44:46.816405  332779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:44:46.883808  332779 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 11:44:46.870696369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:44:46.883938  332779 docker.go:307] overlay module found
	I0819 11:44:46.886370  332779 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 11:44:46.888793  332779 start.go:297] selected driver: docker
	I0819 11:44:46.888826  332779 start.go:901] validating driver "docker" against &{Name:functional-970286 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-970286 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 11:44:46.888962  332779 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 11:44:46.891459  332779 out.go:201] 
	W0819 11:44:46.893550  332779 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 11:44:46.895610  332779 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
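
Note: this is the same RSRC_INSUFFICIENT_REQ_MEMORY failure as in DryRun above, localized; the French stderr translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A sketch of reproducing the localized run, under the assumption (inferred from this test, not from documented behavior) that minikube selects its translations from the process locale (LC_ALL/LANG):

    // i18n_sketch.go reruns the dry run under a French locale.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start", "-p", "functional-970286",
            "--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // request French output (assumed mechanism)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        _ = cmd.Run() // expected: exit 23 with the localized message above
    }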

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

TestFunctional/parallel/ServiceCmdConnect (10.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-970286 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-970286 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-w8qwt" [27b8d736-d951-4fcb-bb92-8ba15d1f378b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-w8qwt" [27b8d736-d951-4fcb-bb92-8ba15d1f378b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004192957s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32007
functional_test.go:1675: http://192.168.49.2:32007: success! body:

Hostname: hello-node-connect-65d86f57f4-w8qwt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32007
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.74s)
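
Note: the test exercises the full NodePort path: create a deployment, expose it, resolve the node URL with "minikube service --url", then GET it and read the echoserver reply shown above. A Go sketch of that probe; the profile and service names are taken from this run, and the resolved port will differ per cluster:

    // svcconnect_sketch.go resolves the NodePort URL for a service and GETs it,
    // mirroring the probe in the trace above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "functional-970286",
            "service", "hello-node-connect", "--url").Output()
        if err != nil {
            panic(err)
        }
        url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:32007
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // Hostname, server values, request headers, ...
    }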

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (26.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0af6895d-6983-45f9-a80c-f228fe4d6d57] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004146645s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-970286 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-970286 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-970286 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-970286 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [60fca61d-2987-4c73-afe4-7786863deea4] Pending
helpers_test.go:344: "sp-pod" [60fca61d-2987-4c73-afe4-7786863deea4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [60fca61d-2987-4c73-afe4-7786863deea4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00340567s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-970286 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-970286 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-970286 delete -f testdata/storage-provisioner/pod.yaml: (1.01049175s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-970286 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [40a009ca-7921-4a4a-8fd5-9fc1dfdcfefe] Pending
helpers_test.go:344: "sp-pod" [40a009ca-7921-4a4a-8fd5-9fc1dfdcfefe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [40a009ca-7921-4a4a-8fd5-9fc1dfdcfefe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004727737s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-970286 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.08s)
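
Note: the sequence above is a persistence check: write a file through the claim, delete the pod, re-create it against the same PVC, and confirm the file survived. The same sequence as a kubectl sketch; readiness waits between steps are elided, and the manifest paths are the test's own testdata files:

    // pvc_sketch.go shells out to kubectl for the persistence check above.
    // Assumptions: kubectl on PATH, the functional-970286 context, and the
    // repo's testdata manifests; waiting for the pod to be Running is omitted.
    package main

    import (
        "os"
        "os/exec"
    )

    func kubectl(args ...string) {
        cmd := exec.Command("kubectl", append([]string{"--context", "functional-970286"}, args...)...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

    func main() {
        kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the PVC
        kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
        kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml") // fresh pod, same claim
        kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")             // foo is still there
    }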

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh -n functional-970286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cp functional-970286:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2514837098/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh -n functional-970286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh -n functional-970286 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.39s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/299191/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /etc/test/nested/copy/299191/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/299191.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /etc/ssl/certs/299191.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/299191.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /usr/share/ca-certificates/299191.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2991912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /etc/ssl/certs/2991912.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2991912.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /usr/share/ca-certificates/2991912.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.70s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-970286 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "sudo systemctl is-active docker": exit status 1 (260.840359ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "sudo systemctl is-active crio": exit status 1 (261.994993ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
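
Note: with containerd as the active runtime, the test asserts that docker and crio are stopped. "systemctl is-active" prints "inactive" and exits 3 for a stopped unit, which "minikube ssh" surfaces as its own non-zero exit ("Process exited with status 3" in the stderr above). A sketch of the same check:

    // runtime_sketch.go checks that the non-selected runtimes are inactive,
    // the way this test does; exit codes are interpreted per the trace above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, unit := range []string{"docker", "crio"} {
            out, err := exec.Command("minikube", "-p", "functional-970286",
                "ssh", "sudo systemctl is-active "+unit).CombinedOutput()
            // expect "inactive" on stdout and a non-zero exit relayed by ssh
            fmt.Printf("%s: %s (err: %v)\n", unit, out, err)
        }
    }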

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 330422: os: process already finished
helpers_test.go:502: unable to terminate pid 330224: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-970286 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [072ae29b-0054-43e2-9a81-3cada0d28a29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [072ae29b-0054-43e2-9a81-3cada0d28a29] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004365407s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-970286 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.146.218 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-970286 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-970286 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-970286 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-vlfxw" [eaf3cde3-0c97-445b-a0f7-a2dcb8c3638a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-vlfxw" [eaf3cde3-0c97-445b-a0f7-a2dcb8c3638a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.006452444s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "389.408387ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "86.002353ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service list -o json
functional_test.go:1494: Took "571.647144ms" to run "out/minikube-linux-arm64 -p functional-970286 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "398.975686ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "74.178866ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

TestFunctional/parallel/MountCmd/any-port (7.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdany-port998305567/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724067883929710489" to /tmp/TestFunctionalparallelMountCmdany-port998305567/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724067883929710489" to /tmp/TestFunctionalparallelMountCmdany-port998305567/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724067883929710489" to /tmp/TestFunctionalparallelMountCmdany-port998305567/001/test-1724067883929710489
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (420.464189ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 11:44 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 11:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 11:44 test-1724067883929710489
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh cat /mount-9p/test-1724067883929710489
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-970286 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [623a08b0-f229-45ec-a837-03b889986517] Pending
helpers_test.go:344: "busybox-mount" [623a08b0-f229-45ec-a837-03b889986517] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [623a08b0-f229-45ec-a837-03b889986517] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [623a08b0-f229-45ec-a837-03b889986517] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004727037s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-970286 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdany-port998305567/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.56s)
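
Note: the flow above mounts a host directory into the node over 9p, polls "findmnt -T /mount-9p" until the mount appears (the first probe races the mount daemon and fails, the retry passes), and only then runs the busybox pod against it. A rough Go sketch of that setup, with a crude sleep standing in for the test's retry loop; /tmp/example-src is a placeholder host directory and must exist:

    // mount_sketch.go starts a 9p mount as a background process and verifies
    // it with findmnt, loosely following the trace above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        mount := exec.Command("minikube", "mount", "-p", "functional-970286",
            "/tmp/example-src:/mount-9p")
        if err := mount.Start(); err != nil { // runs as a daemon, like the test helper
            panic(err)
        }
        defer mount.Process.Kill()
        time.Sleep(2 * time.Second) // crude wait; the real test retries findmnt
        out, _ := exec.Command("minikube", "-p", "functional-970286",
            "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
        fmt.Print(string(out)) // non-empty once the 9p mount is live
    }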

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31687
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.65s)

TestFunctional/parallel/MountCmd/specific-port (2.15s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdspecific-port1996749641/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (467.14435ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdspecific-port1996749641/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "sudo umount -f /mount-9p": exit status 1 (322.631802ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-970286 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdspecific-port1996749641/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T" /mount1: exit status 1 (1.008364542s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-970286 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-970286 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3756726376/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.32s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 version -o=json --components: (1.315613265s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)
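
Both subtests are plain CLI probes; a minimal sketch of the two invocations:

	# short, machine-friendly version string
	out/minikube-linux-arm64 -p functional-970286 version --short
	# version information for the bundled components, as JSON
	out/minikube-linux-arm64 -p functional-970286 version -o=json --components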

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970286 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-970286
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-970286
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970286 image ls --format short --alsologtostderr:
I0819 11:45:04.404289  335734 out.go:345] Setting OutFile to fd 1 ...
I0819 11:45:04.404438  335734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.404449  335734 out.go:358] Setting ErrFile to fd 2...
I0819 11:45:04.404469  335734 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.404748  335734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
I0819 11:45:04.405444  335734 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.405571  335734 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.406094  335734 cli_runner.go:164] Run: docker container inspect functional-970286 --format={{.State.Status}}
I0819 11:45:04.437852  335734 ssh_runner.go:195] Run: systemctl --version
I0819 11:45:04.437911  335734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970286
I0819 11:45:04.461017  335734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/functional-970286/id_rsa Username:docker}
I0819 11:45:04.555656  335734 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970286 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/kicbase/echo-server               | functional-970286  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-970286  | sha256:1eb58d | 992B   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970286 image ls --format table --alsologtostderr:
I0819 11:45:05.016255  335890 out.go:345] Setting OutFile to fd 1 ...
I0819 11:45:05.016911  335890 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:05.016922  335890 out.go:358] Setting ErrFile to fd 2...
I0819 11:45:05.016928  335890 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:05.017362  335890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
I0819 11:45:05.018135  335890 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:05.018275  335890 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:05.018837  335890 cli_runner.go:164] Run: docker container inspect functional-970286 --format={{.State.Status}}
I0819 11:45:05.041345  335890 ssh_runner.go:195] Run: systemctl --version
I0819 11:45:05.041396  335890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970286
I0819 11:45:05.063072  335890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/functional-970286/id_rsa Username:docker}
I0819 11:45:05.159872  335890 ssh_runner.go:195] Run: sudo crictl images --output json
E0819 11:45:07.266298  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:07.273314  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:07.284736  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:07.306192  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:07.347624  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:07.429100  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970286 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b3
6b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io
/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-970286"],"size":"2173567"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c0
4c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","re
poDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:1eb58d4b9e54d35ea6abe43b3488d3d95a341cce6263f55f4ca0655b6858e37c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-970286"],"size":"992"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970286 image ls --format json --alsologtostderr:
I0819 11:45:04.716895  335802 out.go:345] Setting OutFile to fd 1 ...
I0819 11:45:04.717096  335802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.717123  335802 out.go:358] Setting ErrFile to fd 2...
I0819 11:45:04.717143  335802 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.717485  335802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
I0819 11:45:04.718355  335802 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.718550  335802 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.719127  335802 cli_runner.go:164] Run: docker container inspect functional-970286 --format={{.State.Status}}
I0819 11:45:04.743513  335802 ssh_runner.go:195] Run: systemctl --version
I0819 11:45:04.743573  335802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970286
I0819 11:45:04.780557  335802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/functional-970286/id_rsa Username:docker}
I0819 11:45:04.895405  335802 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-970286 image ls --format yaml --alsologtostderr:
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:1eb58d4b9e54d35ea6abe43b3488d3d95a341cce6263f55f4ca0655b6858e37c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-970286
size: "992"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-970286
size: "2173567"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970286 image ls --format yaml --alsologtostderr:
I0819 11:45:04.419530  335733 out.go:345] Setting OutFile to fd 1 ...
I0819 11:45:04.419759  335733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.419789  335733 out.go:358] Setting ErrFile to fd 2...
I0819 11:45:04.419809  335733 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:04.420080  335733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
I0819 11:45:04.420755  335733 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.420923  335733 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:04.421505  335733 cli_runner.go:164] Run: docker container inspect functional-970286 --format={{.State.Status}}
I0819 11:45:04.444602  335733 ssh_runner.go:195] Run: systemctl --version
I0819 11:45:04.444657  335733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970286
I0819 11:45:04.476946  335733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/functional-970286/id_rsa Username:docker}
I0819 11:45:04.568190  335733 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
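
The four ImageList subtests differ only in output encoding; a minimal sketch of the variants exercised above:

	out/minikube-linux-arm64 -p functional-970286 image ls --format short --alsologtostderr
	out/minikube-linux-arm64 -p functional-970286 image ls --format table --alsologtostderr
	out/minikube-linux-arm64 -p functional-970286 image ls --format json --alsologtostderr
	out/minikube-linux-arm64 -p functional-970286 image ls --format yaml --alsologtostderr

As the stderr traces show, on the containerd runtime every variant is backed by the same "sudo crictl images --output json" call over SSH; only the client-side rendering changes.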

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-970286 ssh pgrep buildkitd: exit status 1 (350.295222ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image build -t localhost/my-image:functional-970286 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 image build -t localhost/my-image:functional-970286 testdata/build --alsologtostderr: (2.597336767s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-970286 image build -t localhost/my-image:functional-970286 testdata/build --alsologtostderr:
I0819 11:45:05.036073  335895 out.go:345] Setting OutFile to fd 1 ...
I0819 11:45:05.036693  335895 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:05.036715  335895 out.go:358] Setting ErrFile to fd 2...
I0819 11:45:05.036723  335895 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 11:45:05.037044  335895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
I0819 11:45:05.038118  335895 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:05.041319  335895 config.go:182] Loaded profile config "functional-970286": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 11:45:05.041904  335895 cli_runner.go:164] Run: docker container inspect functional-970286 --format={{.State.Status}}
I0819 11:45:05.063266  335895 ssh_runner.go:195] Run: systemctl --version
I0819 11:45:05.063369  335895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-970286
I0819 11:45:05.095756  335895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/functional-970286/id_rsa Username:docker}
I0819 11:45:05.201719  335895 build_images.go:161] Building image from path: /tmp/build.3342322000.tar
I0819 11:45:05.201798  335895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 11:45:05.213251  335895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3342322000.tar
I0819 11:45:05.222198  335895 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3342322000.tar: stat -c "%s %y" /var/lib/minikube/build/build.3342322000.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3342322000.tar': No such file or directory
I0819 11:45:05.222236  335895 ssh_runner.go:362] scp /tmp/build.3342322000.tar --> /var/lib/minikube/build/build.3342322000.tar (3072 bytes)
I0819 11:45:05.249420  335895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3342322000
I0819 11:45:05.259134  335895 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3342322000 -xf /var/lib/minikube/build/build.3342322000.tar
I0819 11:45:05.270252  335895 containerd.go:394] Building image: /var/lib/minikube/build/build.3342322000
I0819 11:45:05.270366  335895 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3342322000 --local dockerfile=/var/lib/minikube/build/build.3342322000 --output type=image,name=localhost/my-image:functional-970286
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:e2bb1705c44f34b252a1d8daab9f31d551d80d04b59e5bab57c2a9e0a6045429
#8 exporting manifest sha256:e2bb1705c44f34b252a1d8daab9f31d551d80d04b59e5bab57c2a9e0a6045429 0.0s done
#8 exporting config sha256:dd96f29cc59cf9609e19d716ce6785b4d1ab206162bf5bbcf1de6c19fb238297 0.0s done
#8 naming to localhost/my-image:functional-970286 done
#8 DONE 0.1s
I0819 11:45:07.529235  335895 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3342322000 --local dockerfile=/var/lib/minikube/build/build.3342322000 --output type=image,name=localhost/my-image:functional-970286: (2.258825344s)
I0819 11:45:07.529309  335895 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3342322000
I0819 11:45:07.538838  335895 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3342322000.tar
I0819 11:45:07.549014  335895 build_images.go:217] Built localhost/my-image:functional-970286 from /tmp/build.3342322000.tar
I0819 11:45:07.549043  335895 build_images.go:133] succeeded building to: functional-970286
I0819 11:45:07.549048  335895 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
E0819 11:45:07.591131  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)
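
The build path in the trace can be replayed by hand; a minimal sketch, with testdata/build standing in for any directory that contains a Dockerfile:

	# check whether buildkitd is already running in the node (it was not here)
	out/minikube-linux-arm64 -p functional-970286 ssh pgrep buildkitd
	# build and tag the image inside the cluster's runtime
	out/minikube-linux-arm64 -p functional-970286 image build -t localhost/my-image:functional-970286 testdata/build --alsologtostderr
	out/minikube-linux-arm64 -p functional-970286 image ls

As the stderr trace shows, minikube tars the build context, copies it to /var/lib/minikube/build on the node, and drives buildctl with the dockerfile.v0 frontend.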

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-970286
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)
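
Setup only seeds a local image for the load/save subtests that follow; a minimal sketch:

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-970286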

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr
2024/08/19 11:44:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr: (1.004037018s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr: (1.095052775s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.63s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-970286
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr: (1.128757826s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)
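
The three load-daemon subtests all funnel through the same command; a minimal sketch of the step they repeat:

	# push the image from the host Docker daemon into the cluster's containerd store
	out/minikube-linux-arm64 -p functional-970286 image load --daemon kicbase/echo-server:functional-970286 --alsologtostderr
	# confirm it is visible to the runtime
	out/minikube-linux-arm64 -p functional-970286 image ls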

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
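
All three UpdateContextCmd subtests run the same command against different kubeconfig states; a minimal sketch:

	out/minikube-linux-arm64 -p functional-970286 update-context --alsologtostderr -v=2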

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image save kicbase/echo-server:functional-970286 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.42s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image rm kicbase/echo-server:functional-970286 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-970286
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-970286 image save --daemon kicbase/echo-server:functional-970286 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-970286
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
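
Taken together, the save/remove/load subtests form one round trip through a tarball; a minimal sketch using the save path from the run above:

	# export the image from the cluster to a tar on the host
	out/minikube-linux-arm64 -p functional-970286 image save kicbase/echo-server:functional-970286 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
	# remove it from the cluster, then restore it from the tar
	out/minikube-linux-arm64 -p functional-970286 image rm kicbase/echo-server:functional-970286 --alsologtostderr
	out/minikube-linux-arm64 -p functional-970286 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
	# or push it straight back into the host Docker daemon and verify
	out/minikube-linux-arm64 -p functional-970286 image save --daemon kicbase/echo-server:functional-970286 --alsologtostderr
	docker image inspect kicbase/echo-server:functional-970286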

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-970286
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-970286
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-970286
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (115.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301725 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 11:45:12.399161  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:17.521308  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:27.763393  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:45:48.244792  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:46:29.206465  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-301725 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.102687626s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (115.96s)
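
The HA cluster under test comes up from a single start invocation; a minimal sketch of the commands used above:

	out/minikube-linux-arm64 start -p ha-301725 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr

The --ha flag provisions multiple control-plane nodes up front, which the later StopSecondaryNode check depends on.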

                                                
                                    
TestMultiControlPlane/serial/DeployApp (28.48s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-301725 -- rollout status deployment/busybox: (25.409571737s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-7fgsq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-j4t5v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-pbfhz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-7fgsq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-j4t5v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-pbfhz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-7fgsq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-j4t5v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-pbfhz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (28.48s)

TestMultiControlPlane/serial/PingHostFromPods (1.64s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-7fgsq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-7fgsq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-j4t5v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-j4t5v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-pbfhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-301725 -- exec busybox-7dff88458-pbfhz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)
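
The host-reachability check is two execs per busybox pod; a minimal sketch, where <pod> is a placeholder for any pod name from the deployment:

	# resolve the host gateway name from inside the pod
	out/minikube-linux-arm64 kubectl -p ha-301725 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# ping the gateway address (192.168.49.1 on the default docker network)
	out/minikube-linux-arm64 kubectl -p ha-301725 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"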

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-301725 -v=7 --alsologtostderr
E0819 11:47:51.128050  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-301725 -v=7 --alsologtostderr: (23.785281889s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr: (1.175357006s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.96s)
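
Growing the cluster is a single node command followed by a status check; a minimal sketch:

	out/minikube-linux-arm64 node add -p ha-301725 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr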

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-301725 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

TestMultiControlPlane/serial/CopyFile (19.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 status --output json -v=7 --alsologtostderr: (1.021603568s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp testdata/cp-test.txt ha-301725:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2645352594/001/cp-test_ha-301725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725:/home/docker/cp-test.txt ha-301725-m02:/home/docker/cp-test_ha-301725_ha-301725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test_ha-301725_ha-301725-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725:/home/docker/cp-test.txt ha-301725-m03:/home/docker/cp-test_ha-301725_ha-301725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test_ha-301725_ha-301725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725:/home/docker/cp-test.txt ha-301725-m04:/home/docker/cp-test_ha-301725_ha-301725-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test_ha-301725_ha-301725-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp testdata/cp-test.txt ha-301725-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2645352594/001/cp-test_ha-301725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m02:/home/docker/cp-test.txt ha-301725:/home/docker/cp-test_ha-301725-m02_ha-301725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test_ha-301725-m02_ha-301725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m02:/home/docker/cp-test.txt ha-301725-m03:/home/docker/cp-test_ha-301725-m02_ha-301725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test_ha-301725-m02_ha-301725-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m02:/home/docker/cp-test.txt ha-301725-m04:/home/docker/cp-test_ha-301725-m02_ha-301725-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test_ha-301725-m02_ha-301725-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp testdata/cp-test.txt ha-301725-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2645352594/001/cp-test_ha-301725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m03:/home/docker/cp-test.txt ha-301725:/home/docker/cp-test_ha-301725-m03_ha-301725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test_ha-301725-m03_ha-301725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m03:/home/docker/cp-test.txt ha-301725-m02:/home/docker/cp-test_ha-301725-m03_ha-301725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test_ha-301725-m03_ha-301725-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m03:/home/docker/cp-test.txt ha-301725-m04:/home/docker/cp-test_ha-301725-m03_ha-301725-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test_ha-301725-m03_ha-301725-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp testdata/cp-test.txt ha-301725-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2645352594/001/cp-test_ha-301725-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m04:/home/docker/cp-test.txt ha-301725:/home/docker/cp-test_ha-301725-m04_ha-301725.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725 "sudo cat /home/docker/cp-test_ha-301725-m04_ha-301725.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m04:/home/docker/cp-test.txt ha-301725-m02:/home/docker/cp-test_ha-301725-m04_ha-301725-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m02 "sudo cat /home/docker/cp-test_ha-301725-m04_ha-301725-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 cp ha-301725-m04:/home/docker/cp-test.txt ha-301725-m03:/home/docker/cp-test_ha-301725-m04_ha-301725-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 ssh -n ha-301725-m03 "sudo cat /home/docker/cp-test_ha-301725-m04_ha-301725-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.14s)
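
For reference, the copy-and-verify round trip this test drives can be reproduced by hand. A minimal sketch, assuming an existing profile named "demo" with a second node "demo-m02" (both names hypothetical):

	# push a local file into a node, then read it back over ssh
	minikube -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt
	minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies follow the same source:path target:path shape
	minikube -p demo cp demo-m02:/home/docker/cp-test.txt demo:/home/docker/cp-test_copy.txt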

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.81s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 node stop m02 -v=7 --alsologtostderr: (12.059593008s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr: exit status 7 (747.284199ms)
-- stdout --
	ha-301725
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301725-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301725-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-301725-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0819 11:48:33.775251  352219 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:48:33.775444  352219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:48:33.775471  352219 out.go:358] Setting ErrFile to fd 2...
	I0819 11:48:33.775490  352219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:48:33.775839  352219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:48:33.776073  352219 out.go:352] Setting JSON to false
	I0819 11:48:33.776135  352219 mustload.go:65] Loading cluster: ha-301725
	I0819 11:48:33.776578  352219 config.go:182] Loaded profile config "ha-301725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:48:33.776618  352219 status.go:255] checking status of ha-301725 ...
	I0819 11:48:33.777177  352219 cli_runner.go:164] Run: docker container inspect ha-301725 --format={{.State.Status}}
	I0819 11:48:33.777563  352219 notify.go:220] Checking for updates...
	I0819 11:48:33.796731  352219 status.go:330] ha-301725 host status = "Running" (err=<nil>)
	I0819 11:48:33.796756  352219 host.go:66] Checking if "ha-301725" exists ...
	I0819 11:48:33.797166  352219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301725
	I0819 11:48:33.816711  352219 host.go:66] Checking if "ha-301725" exists ...
	I0819 11:48:33.817044  352219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:48:33.817122  352219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301725
	I0819 11:48:33.844867  352219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/ha-301725/id_rsa Username:docker}
	I0819 11:48:33.940386  352219 ssh_runner.go:195] Run: systemctl --version
	I0819 11:48:33.945170  352219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:48:33.957385  352219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 11:48:34.033300  352219 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 11:48:34.022659515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 11:48:34.033970  352219 kubeconfig.go:125] found "ha-301725" server: "https://192.168.49.254:8443"
	I0819 11:48:34.034008  352219 api_server.go:166] Checking apiserver status ...
	I0819 11:48:34.034080  352219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:34.046838  352219 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1405/cgroup
	I0819 11:48:34.057007  352219 api_server.go:182] apiserver freezer: "5:freezer:/docker/ba0ccd83eb819bdbd34daf26841479c446256bfdec10847023fe4ec113222c8a/kubepods/burstable/pod692ff782f7a185e5809710773f632446/9f5c94d23fb8f6007ef44e769362063f4233363aab987743fdaade9deb7a991f"
	I0819 11:48:34.057084  352219 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ba0ccd83eb819bdbd34daf26841479c446256bfdec10847023fe4ec113222c8a/kubepods/burstable/pod692ff782f7a185e5809710773f632446/9f5c94d23fb8f6007ef44e769362063f4233363aab987743fdaade9deb7a991f/freezer.state
	I0819 11:48:34.066839  352219 api_server.go:204] freezer state: "THAWED"
	I0819 11:48:34.066944  352219 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 11:48:34.075283  352219 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 11:48:34.075319  352219 status.go:422] ha-301725 apiserver status = Running (err=<nil>)
	I0819 11:48:34.075332  352219 status.go:257] ha-301725 status: &{Name:ha-301725 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:48:34.075349  352219 status.go:255] checking status of ha-301725-m02 ...
	I0819 11:48:34.075682  352219 cli_runner.go:164] Run: docker container inspect ha-301725-m02 --format={{.State.Status}}
	I0819 11:48:34.097348  352219 status.go:330] ha-301725-m02 host status = "Stopped" (err=<nil>)
	I0819 11:48:34.097373  352219 status.go:343] host is not running, skipping remaining checks
	I0819 11:48:34.097381  352219 status.go:257] ha-301725-m02 status: &{Name:ha-301725-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:48:34.097402  352219 status.go:255] checking status of ha-301725-m03 ...
	I0819 11:48:34.097835  352219 cli_runner.go:164] Run: docker container inspect ha-301725-m03 --format={{.State.Status}}
	I0819 11:48:34.115299  352219 status.go:330] ha-301725-m03 host status = "Running" (err=<nil>)
	I0819 11:48:34.115326  352219 host.go:66] Checking if "ha-301725-m03" exists ...
	I0819 11:48:34.115643  352219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301725-m03
	I0819 11:48:34.134338  352219 host.go:66] Checking if "ha-301725-m03" exists ...
	I0819 11:48:34.134724  352219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:48:34.134781  352219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301725-m03
	I0819 11:48:34.152133  352219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/ha-301725-m03/id_rsa Username:docker}
	I0819 11:48:34.244637  352219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:48:34.257399  352219 kubeconfig.go:125] found "ha-301725" server: "https://192.168.49.254:8443"
	I0819 11:48:34.257437  352219 api_server.go:166] Checking apiserver status ...
	I0819 11:48:34.257481  352219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 11:48:34.269113  352219 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	I0819 11:48:34.279838  352219 api_server.go:182] apiserver freezer: "5:freezer:/docker/9df343d048d5707a187e4ac2ee5adba22a12ebc29713e3f271c89dcb0551265c/kubepods/burstable/podff44ccf198f7c333c9ea4cfb027d1ff0/b8650e1160315c3f0802e2d7e8ee4c2a6d7fbf430a9e7e4904ab747d41c52786"
	I0819 11:48:34.279925  352219 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9df343d048d5707a187e4ac2ee5adba22a12ebc29713e3f271c89dcb0551265c/kubepods/burstable/podff44ccf198f7c333c9ea4cfb027d1ff0/b8650e1160315c3f0802e2d7e8ee4c2a6d7fbf430a9e7e4904ab747d41c52786/freezer.state
	I0819 11:48:34.289310  352219 api_server.go:204] freezer state: "THAWED"
	I0819 11:48:34.289337  352219 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 11:48:34.297749  352219 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 11:48:34.297790  352219 status.go:422] ha-301725-m03 apiserver status = Running (err=<nil>)
	I0819 11:48:34.297801  352219 status.go:257] ha-301725-m03 status: &{Name:ha-301725-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:48:34.297829  352219 status.go:255] checking status of ha-301725-m04 ...
	I0819 11:48:34.298204  352219 cli_runner.go:164] Run: docker container inspect ha-301725-m04 --format={{.State.Status}}
	I0819 11:48:34.317731  352219 status.go:330] ha-301725-m04 host status = "Running" (err=<nil>)
	I0819 11:48:34.317758  352219 host.go:66] Checking if "ha-301725-m04" exists ...
	I0819 11:48:34.318051  352219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-301725-m04
	I0819 11:48:34.339105  352219 host.go:66] Checking if "ha-301725-m04" exists ...
	I0819 11:48:34.339426  352219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 11:48:34.339469  352219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-301725-m04
	I0819 11:48:34.362393  352219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/ha-301725-m04/id_rsa Username:docker}
	I0819 11:48:34.452315  352219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 11:48:34.465170  352219 status.go:257] ha-301725-m04 status: &{Name:ha-301725-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.81s)
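
The non-zero exit above is the point of the assertion: with one control-plane host stopped, "minikube status" exits with status 7 (as seen here) instead of 0, so the exit code alone works as a cheap health probe. A minimal sketch (hypothetical profile "demo"):

	minikube -p demo node stop m02
	if ! minikube -p demo status >/dev/null; then
		echo "at least one node is not fully running"
	fi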

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (28.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 node start m02 -v=7 --alsologtostderr: (27.363090596s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr: (1.119417368s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.59s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.66s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-301725 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-301725 -v=7 --alsologtostderr
E0819 11:49:15.856150  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:15.862553  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:15.873971  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:15.895343  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:15.936780  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:16.024228  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:16.185896  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:16.507368  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:17.149448  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:18.431084  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:20.992463  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:26.114386  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:49:36.355769  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-301725 -v=7 --alsologtostderr: (37.249161006s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301725 --wait=true -v=7 --alsologtostderr
E0819 11:49:56.837831  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:50:07.266018  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:50:34.969950  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:50:37.799577  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-301725 --wait=true -v=7 --alsologtostderr: (1m35.224485299s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-301725
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (132.66s)
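
The scenario above is a full stop/start cycle with node membership compared before and after. A minimal sketch of the same flow (hypothetical profile "demo"):

	minikube node list -p demo          # record the node set
	minikube stop -p demo
	minikube start -p demo --wait=true  # restart and wait for components
	minikube node list -p demo          # expect the same node set back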

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 node delete m03 -v=7 --alsologtostderr: (10.233925419s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.17s)
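
The go-template query at the end is the per-node readiness check: it prints the status of each node's Ready condition, one per line. The same query, standalone:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'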

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.31s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 stop -v=7 --alsologtostderr
E0819 11:51:59.721017  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 stop -v=7 --alsologtostderr: (36.173597719s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr: exit status 7 (133.211509ms)
-- stdout --
	ha-301725
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301725-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-301725-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0819 11:52:05.059288  366473 out.go:345] Setting OutFile to fd 1 ...
	I0819 11:52:05.059502  366473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:52:05.059534  366473 out.go:358] Setting ErrFile to fd 2...
	I0819 11:52:05.059554  366473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 11:52:05.059846  366473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 11:52:05.060106  366473 out.go:352] Setting JSON to false
	I0819 11:52:05.060182  366473 mustload.go:65] Loading cluster: ha-301725
	I0819 11:52:05.060280  366473 notify.go:220] Checking for updates...
	I0819 11:52:05.060677  366473 config.go:182] Loaded profile config "ha-301725": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 11:52:05.060713  366473 status.go:255] checking status of ha-301725 ...
	I0819 11:52:05.061243  366473 cli_runner.go:164] Run: docker container inspect ha-301725 --format={{.State.Status}}
	I0819 11:52:05.081254  366473 status.go:330] ha-301725 host status = "Stopped" (err=<nil>)
	I0819 11:52:05.081284  366473 status.go:343] host is not running, skipping remaining checks
	I0819 11:52:05.081292  366473 status.go:257] ha-301725 status: &{Name:ha-301725 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:52:05.081326  366473 status.go:255] checking status of ha-301725-m02 ...
	I0819 11:52:05.081654  366473 cli_runner.go:164] Run: docker container inspect ha-301725-m02 --format={{.State.Status}}
	I0819 11:52:05.106700  366473 status.go:330] ha-301725-m02 host status = "Stopped" (err=<nil>)
	I0819 11:52:05.106730  366473 status.go:343] host is not running, skipping remaining checks
	I0819 11:52:05.106737  366473 status.go:257] ha-301725-m02 status: &{Name:ha-301725-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 11:52:05.106758  366473 status.go:255] checking status of ha-301725-m04 ...
	I0819 11:52:05.107110  366473 cli_runner.go:164] Run: docker container inspect ha-301725-m04 --format={{.State.Status}}
	I0819 11:52:05.125740  366473 status.go:330] ha-301725-m04 host status = "Stopped" (err=<nil>)
	I0819 11:52:05.125762  366473 status.go:343] host is not running, skipping remaining checks
	I0819 11:52:05.125770  366473 status.go:257] ha-301725-m04 status: &{Name:ha-301725-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.31s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (76.89s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-301725 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-301725 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.933305862s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (76.89s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (39.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-301725 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-301725 --control-plane -v=7 --alsologtostderr: (38.279632417s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-301725 status -v=7 --alsologtostderr: (1.061541328s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.34s)
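
Growing the control plane back to three members is a single command. A minimal sketch (hypothetical profile "demo"):

	minikube node add -p demo --control-plane
	minikube -p demo status   # the new node should report "type: Control Plane"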

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (51.32s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-133186 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0819 11:54:15.856323  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 11:54:43.562505  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-133186 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.315735992s)
--- PASS: TestJSONOutput/start/Command (51.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-133186 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-133186 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-133186 --output=json --user=testUser
E0819 11:55:07.265777  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-133186 --output=json --user=testUser: (5.720770879s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-672300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-672300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.633734ms)
-- stdout --
	{"specversion":"1.0","id":"05233458-0766-4c9a-b91a-9efbf7beba1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-672300] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c4f8afa-5150-4167-aace-f229efd8e854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"11be17c3-049e-4e00-9f59-b2a188c489f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"030ac131-0822-4b7a-a3da-0d1718a08e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig"}}
	{"specversion":"1.0","id":"90ea6e53-5001-484e-a259-d315a04dc730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube"}}
	{"specversion":"1.0","id":"169b6d0d-3cc7-473a-bce6-9664872400f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8bb72fab-4b61-46fb-9bfa-ad264ccec5f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c9bfca96-b083-42ce-a0e5-41d57490eac7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-672300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-672300
--- PASS: TestErrorJSONOutput (0.21s)
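
Every stdout line above is a self-contained CloudEvents-style JSON object (note the "specversion":"1.0" field), so the stream stays machine-readable even when startup fails. A consumption sketch, assuming jq is available and using the same intentionally unsupported driver:

	minikube start -p demo --output=json --driver=fail \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'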

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.35s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-268949 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-268949 --network=: (33.211536175s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-268949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-268949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-268949: (2.119211748s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.35s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-300377 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-300377 --network=bridge: (31.599660044s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-300377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-300377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-300377: (1.930782747s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.55s)
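
Both subtests share the same verify pattern: start a profile with a --network flag, then confirm the docker network exists by name. A minimal sketch (hypothetical profile and network names):

	minikube start -p demo --network=demo-net
	docker network ls --format '{{.Name}}' | grep -x demo-net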

                                                
                                    
TestKicExistingNetwork (32.95s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-703228 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-703228 --network=existing-network: (30.875435477s)
helpers_test.go:175: Cleaning up "existing-network-703228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-703228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-703228: (1.929981662s)
--- PASS: TestKicExistingNetwork (32.95s)

                                                
                                    
TestKicCustomSubnet (36.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-440442 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-440442 --subnet=192.168.60.0/24: (34.102903112s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-440442 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-440442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-440442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-440442: (2.099096471s)
--- PASS: TestKicCustomSubnet (36.23s)
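
The subnet assertion can be run against any profile; with the docker driver the network is named after the profile, as the inspect command above shows. A minimal sketch (hypothetical profile name):

	minikube start -p demo --subnet=192.168.60.0/24
	docker network inspect demo --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24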

                                                
                                    
TestKicStaticIP (35.16s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-571913 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-571913 --static-ip=192.168.200.200: (32.937758582s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-571913 ip
helpers_test.go:175: Cleaning up "static-ip-571913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-571913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-571913: (2.077806498s)
--- PASS: TestKicStaticIP (35.16s)
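
Static addressing is verified the same way: request the IP at start, then read it back. A minimal sketch (hypothetical profile name):

	minikube start -p demo --static-ip=192.168.200.200
	minikube -p demo ip   # expect 192.168.200.200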

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (68.59s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-466294 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-466294 --driver=docker  --container-runtime=containerd: (30.910049612s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-468994 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-468994 --driver=docker  --container-runtime=containerd: (32.152936558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-466294
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-468994
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-468994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-468994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-468994: (2.06750119s)
helpers_test.go:175: Cleaning up "first-466294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-466294
E0819 11:59:15.856505  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-466294: (2.190914375s)
--- PASS: TestMinikubeProfile (68.59s)
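
What the test exercises between the two clusters is profile switching. A minimal sketch (hypothetical profile names):

	minikube start -p first --driver=docker --container-runtime=containerd
	minikube start -p second --driver=docker --container-runtime=containerd
	minikube profile first        # make "first" the active profile
	minikube profile list -ojson  # both profiles, machine-readable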

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.61s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-486817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-486817 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.610122776s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.61s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-486817 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
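
The start flags above pin every mount tunable (uid, gid, msize, port) so the host mount is deterministic, and the verify step just lists the mount point from inside the guest. A minimal sketch (hypothetical profile name):

	minikube start -p demo --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 --no-kubernetes
	minikube -p demo ssh -- ls /minikube-host   # host directory visible in the guest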

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-499887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-499887 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.46718657s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-499887 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-486817 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-486817 --alsologtostderr -v=5: (1.647979973s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-499887 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-499887
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-499887: (1.200113008s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.28s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-499887
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-499887: (6.277019137s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-499887 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.72s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-223943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 12:00:07.265180  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-223943 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m9.175755246s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.72s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (16.77s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-223943 -- rollout status deployment/busybox: (14.953308332s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-hjpx2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-z8nn6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-hjpx2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-z8nn6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-hjpx2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-z8nn6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.77s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-hjpx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-hjpx2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-z8nn6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-223943 -- exec busybox-7dff88458-z8nn6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
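Note: the awk 'NR==5' | cut -d' ' -f3 pipeline extracts the host gateway IP from the fifth line of busybox's nslookup output. A minimal sketch of the same reachability check, assuming kubectl already points at the cluster and a busybox pod named test-pod exists (both hypothetical):

    # resolve host.minikube.internal from inside the pod, then ping the result once
    HOST_IP=$(kubectl exec test-pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec test-pod -- ping -c 1 "$HOST_IP"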

TestMultiNode/serial/AddNode (20s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-223943 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-223943 -v 3 --alsologtostderr: (19.316010486s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.00s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-223943 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (10.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --output json --alsologtostderr
E0819 12:01:30.331794  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp testdata/cp-test.txt multinode-223943:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1550328520/001/cp-test_multinode-223943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943:/home/docker/cp-test.txt multinode-223943-m02:/home/docker/cp-test_multinode-223943_multinode-223943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test_multinode-223943_multinode-223943-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943:/home/docker/cp-test.txt multinode-223943-m03:/home/docker/cp-test_multinode-223943_multinode-223943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test_multinode-223943_multinode-223943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp testdata/cp-test.txt multinode-223943-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1550328520/001/cp-test_multinode-223943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m02:/home/docker/cp-test.txt multinode-223943:/home/docker/cp-test_multinode-223943-m02_multinode-223943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test_multinode-223943-m02_multinode-223943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m02:/home/docker/cp-test.txt multinode-223943-m03:/home/docker/cp-test_multinode-223943-m02_multinode-223943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test_multinode-223943-m02_multinode-223943-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp testdata/cp-test.txt multinode-223943-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1550328520/001/cp-test_multinode-223943-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m03:/home/docker/cp-test.txt multinode-223943:/home/docker/cp-test_multinode-223943-m03_multinode-223943.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943 "sudo cat /home/docker/cp-test_multinode-223943-m03_multinode-223943.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 cp multinode-223943-m03:/home/docker/cp-test.txt multinode-223943-m02:/home/docker/cp-test_multinode-223943-m03_multinode-223943-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 ssh -n multinode-223943-m02 "sudo cat /home/docker/cp-test_multinode-223943-m03_multinode-223943-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
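Note: the block above is a full copy matrix (host->node, node->host, node->node), each copy verified over ssh. A condensed sketch of the three directions, assuming a profile multinode-demo with a second node multinode-demo-m02:

    minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt                         # host -> node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                             # node -> host
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"                           # verify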

TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-223943 node stop m03: (1.229926321s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-223943 status: exit status 7 (507.302397ms)

-- stdout --
	multinode-223943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-223943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-223943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr: exit status 7 (603.650851ms)

-- stdout --
	multinode-223943
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-223943-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-223943-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 12:01:42.101122  420023 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:01:42.101339  420023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:01:42.101371  420023 out.go:358] Setting ErrFile to fd 2...
	I0819 12:01:42.101395  420023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:01:42.101731  420023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 12:01:42.102014  420023 out.go:352] Setting JSON to false
	I0819 12:01:42.102104  420023 mustload.go:65] Loading cluster: multinode-223943
	I0819 12:01:42.102242  420023 notify.go:220] Checking for updates...
	I0819 12:01:42.102739  420023 config.go:182] Loaded profile config "multinode-223943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:01:42.102783  420023 status.go:255] checking status of multinode-223943 ...
	I0819 12:01:42.103465  420023 cli_runner.go:164] Run: docker container inspect multinode-223943 --format={{.State.Status}}
	I0819 12:01:42.149106  420023 status.go:330] multinode-223943 host status = "Running" (err=<nil>)
	I0819 12:01:42.149132  420023 host.go:66] Checking if "multinode-223943" exists ...
	I0819 12:01:42.149622  420023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-223943
	I0819 12:01:42.182761  420023 host.go:66] Checking if "multinode-223943" exists ...
	I0819 12:01:42.183778  420023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:01:42.183854  420023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-223943
	I0819 12:01:42.216930  420023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/multinode-223943/id_rsa Username:docker}
	I0819 12:01:42.320060  420023 ssh_runner.go:195] Run: systemctl --version
	I0819 12:01:42.325287  420023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:01:42.339050  420023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:01:42.408615  420023 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 12:01:42.396491854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:01:42.409229  420023 kubeconfig.go:125] found "multinode-223943" server: "https://192.168.67.2:8443"
	I0819 12:01:42.409270  420023 api_server.go:166] Checking apiserver status ...
	I0819 12:01:42.409318  420023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 12:01:42.422559  420023 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1448/cgroup
	I0819 12:01:42.433760  420023 api_server.go:182] apiserver freezer: "5:freezer:/docker/7272a2ed935c7236b2d10867729ddce0f578b86b65667819fc2ddd50a064eac1/kubepods/burstable/pod25932f23df38565ad079d96735074f15/ba9e9c4527b2a9499e87a835d779229c12edb1fc83ee3e2fb9d2d9d92276c7ae"
	I0819 12:01:42.433839  420023 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7272a2ed935c7236b2d10867729ddce0f578b86b65667819fc2ddd50a064eac1/kubepods/burstable/pod25932f23df38565ad079d96735074f15/ba9e9c4527b2a9499e87a835d779229c12edb1fc83ee3e2fb9d2d9d92276c7ae/freezer.state
	I0819 12:01:42.443466  420023 api_server.go:204] freezer state: "THAWED"
	I0819 12:01:42.443495  420023 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 12:01:42.451324  420023 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 12:01:42.451353  420023 status.go:422] multinode-223943 apiserver status = Running (err=<nil>)
	I0819 12:01:42.451365  420023 status.go:257] multinode-223943 status: &{Name:multinode-223943 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:01:42.451411  420023 status.go:255] checking status of multinode-223943-m02 ...
	I0819 12:01:42.451751  420023 cli_runner.go:164] Run: docker container inspect multinode-223943-m02 --format={{.State.Status}}
	I0819 12:01:42.469067  420023 status.go:330] multinode-223943-m02 host status = "Running" (err=<nil>)
	I0819 12:01:42.469097  420023 host.go:66] Checking if "multinode-223943-m02" exists ...
	I0819 12:01:42.469405  420023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-223943-m02
	I0819 12:01:42.485940  420023 host.go:66] Checking if "multinode-223943-m02" exists ...
	I0819 12:01:42.486265  420023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 12:01:42.486312  420023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-223943-m02
	I0819 12:01:42.503872  420023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19476-293809/.minikube/machines/multinode-223943-m02/id_rsa Username:docker}
	I0819 12:01:42.600196  420023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 12:01:42.612917  420023 status.go:257] multinode-223943-m02 status: &{Name:multinode-223943-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:01:42.612954  420023 status.go:255] checking status of multinode-223943-m03 ...
	I0819 12:01:42.613312  420023 cli_runner.go:164] Run: docker container inspect multinode-223943-m03 --format={{.State.Status}}
	I0819 12:01:42.629455  420023 status.go:330] multinode-223943-m03 host status = "Stopped" (err=<nil>)
	I0819 12:01:42.629481  420023 status.go:343] host is not running, skipping remaining checks
	I0819 12:01:42.629489  420023 status.go:257] multinode-223943-m03 status: &{Name:multinode-223943-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
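Note: `minikube status` exits with code 7 whenever any node is stopped, so the test reads the non-zero exit as state rather than failure. A sketch against the hypothetical multinode-demo profile:

    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status
    echo $?   # 7 is expected while m03 is stopped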

TestMultiNode/serial/StartAfterStop (10.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-223943 node start m03 -v=7 --alsologtostderr: (9.267461551s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.05s)

TestMultiNode/serial/RestartKeepsNodes (94.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-223943
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-223943
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-223943: (24.98452494s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-223943 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-223943 --wait=true -v=8 --alsologtostderr: (1m9.626703038s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-223943
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.74s)

TestMultiNode/serial/DeleteNode (5.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-223943 node delete m03: (4.821266664s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.50s)

TestMultiNode/serial/StopMultiNode (24.2s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-223943 stop: (24.011655574s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-223943 status: exit status 7 (91.289234ms)

-- stdout --
	multinode-223943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-223943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr: exit status 7 (97.150521ms)

-- stdout --
	multinode-223943
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-223943-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0819 12:03:57.085216  428491 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:03:57.085400  428491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:57.085415  428491 out.go:358] Setting ErrFile to fd 2...
	I0819 12:03:57.085422  428491 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:03:57.085721  428491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 12:03:57.085968  428491 out.go:352] Setting JSON to false
	I0819 12:03:57.086034  428491 mustload.go:65] Loading cluster: multinode-223943
	I0819 12:03:57.086163  428491 notify.go:220] Checking for updates...
	I0819 12:03:57.086554  428491 config.go:182] Loaded profile config "multinode-223943": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:03:57.086576  428491 status.go:255] checking status of multinode-223943 ...
	I0819 12:03:57.087199  428491 cli_runner.go:164] Run: docker container inspect multinode-223943 --format={{.State.Status}}
	I0819 12:03:57.107187  428491 status.go:330] multinode-223943 host status = "Stopped" (err=<nil>)
	I0819 12:03:57.107216  428491 status.go:343] host is not running, skipping remaining checks
	I0819 12:03:57.107225  428491 status.go:257] multinode-223943 status: &{Name:multinode-223943 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 12:03:57.107249  428491 status.go:255] checking status of multinode-223943-m02 ...
	I0819 12:03:57.107578  428491 cli_runner.go:164] Run: docker container inspect multinode-223943-m02 --format={{.State.Status}}
	I0819 12:03:57.133327  428491 status.go:330] multinode-223943-m02 host status = "Stopped" (err=<nil>)
	I0819 12:03:57.133354  428491 status.go:343] host is not running, skipping remaining checks
	I0819 12:03:57.133363  428491 status.go:257] multinode-223943-m02 status: &{Name:multinode-223943-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.20s)

TestMultiNode/serial/RestartMultiNode (49.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-223943 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 12:04:15.855933  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-223943 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.049313056s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-223943 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.71s)

TestMultiNode/serial/ValidateNameConflict (32.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-223943
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-223943-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-223943-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.380525ms)

-- stdout --
	* [multinode-223943-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-223943-m02' is duplicated with machine name 'multinode-223943-m02' in profile 'multinode-223943'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-223943-m03 --driver=docker  --container-runtime=containerd
E0819 12:05:07.265229  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-223943-m03 --driver=docker  --container-runtime=containerd: (30.376617673s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-223943
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-223943: exit status 80 (311.93609ms)

-- stdout --
	* Adding node m03 to cluster multinode-223943 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-223943-m03 already exists in multinode-223943-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-223943-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-223943-m03: (1.964580879s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.80s)
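Note: extra nodes of a profile X are registered as machines X-m02, X-m03, and so on. That is why a profile literally named multinode-223943-m02 is rejected up front (exit 14), while the separately created multinode-223943-m03 profile only collides later, when `node add` tries to claim the m03 machine name (exit 80). A sketch of the first guard, assuming an existing two-node profile named demo:

    minikube start -p demo-m02 --driver=docker --container-runtime=containerd
    echo $?   # 14: profile name duplicates a machine name inside 'demo'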

TestPreload (114.03s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-667135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0819 12:05:38.924438  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-667135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.101942697s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-667135 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-667135 image pull gcr.io/k8s-minikube/busybox: (1.226737296s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-667135
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-667135: (12.030838198s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-667135 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-667135 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.892028198s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-667135 image list
helpers_test.go:175: Cleaning up "test-preload-667135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-667135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-667135: (2.395984285s)
--- PASS: TestPreload (114.03s)
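Note: the preload check boils down to pulling an image into a cluster started with --preload=false, then confirming the image survives a stop/start cycle once the preloaded tarball is in play. A sketch with an illustrative profile name:

    minikube start -p preload-demo --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=containerd
    minikube -p preload-demo image list   # busybox should still be listed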

TestScheduledStopUnix (108.56s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-451825 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-451825 --memory=2048 --driver=docker  --container-runtime=containerd: (32.384972804s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-451825 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-451825 -n scheduled-stop-451825
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-451825 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-451825 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-451825 -n scheduled-stop-451825
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-451825
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-451825 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-451825
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-451825: exit status 7 (67.960562ms)

-- stdout --
	scheduled-stop-451825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-451825 -n scheduled-stop-451825
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-451825 -n scheduled-stop-451825: exit status 7 (67.352236ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-451825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-451825
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-451825: (4.595058364s)
--- PASS: TestScheduledStopUnix (108.56s)
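Note: the scheduled-stop flow above reduces to three commands; the profile name here is hypothetical:

    minikube stop -p demo --schedule 5m                          # arm a stop five minutes out
    minikube stop -p demo --cancel-scheduled                     # disarm it
    minikube status --format={{.TimeToStop}} -p demo -n demo     # inspect the pending timer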

TestInsufficientStorage (13.29s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-518481 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0819 12:09:15.856191  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-518481 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.838268143s)

-- stdout --
	{"specversion":"1.0","id":"5e8b4412-eec3-4cd1-888e-854aab1e6efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-518481] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c47e22c5-7864-444a-bfb5-d74b44c9c8ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19476"}}
	{"specversion":"1.0","id":"b5374832-4c96-42b9-b24f-109497cb75c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79be9536-360d-4952-9433-851007ce1c8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig"}}
	{"specversion":"1.0","id":"25b64910-b5e9-4e5c-8f86-5f8dbb17efd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube"}}
	{"specversion":"1.0","id":"d0e7c838-6d1f-4490-a0e5-c84bc87f85b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cbceaf17-750e-44cc-9d31-97cbc6702d4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b620785f-6f43-4e78-a210-066b452ac5b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"449d667d-8fa3-45ab-a66e-3e0c221ec056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"110a2335-696d-49c3-9472-a25b49c68216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf6187d0-c4fb-4686-8bb2-7d83638d0c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f84cebea-2a54-4c6e-a14c-52b41a6a7d26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-518481\" primary control-plane node in \"insufficient-storage-518481\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f0ec827-c29c-472f-8e33-9e49ba1b84d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aa59afd-2655-4d52-87d0-28607a691da1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc94811c-9948-441c-8e36-d4c208f72af5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-518481 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-518481 --output=json --layout=cluster: exit status 7 (283.325188ms)

-- stdout --
	{"Name":"insufficient-storage-518481","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518481","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 12:09:17.362300  447219 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-518481" does not appear in /home/jenkins/minikube-integration/19476-293809/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-518481 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-518481 --output=json --layout=cluster: exit status 7 (286.420758ms)

-- stdout --
	{"Name":"insufficient-storage-518481","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-518481","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0819 12:09:17.648270  447283 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-518481" does not appear in /home/jenkins/minikube-integration/19476-293809/kubeconfig
	E0819 12:09:17.658525  447283 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/insufficient-storage-518481/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-518481" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-518481
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-518481: (1.878271636s)
--- PASS: TestInsufficientStorage (13.29s)
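Note: the storage failure is simulated rather than real: judging by the event stream above, MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE cap minikube's view of the disk so the RSRC_DOCKER_STORAGE check trips (exit 26). A sketch of the same simulated run; the precise semantics of the two variables are inferred from this log:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
    echo $?   # 26 = RSRC_DOCKER_STORAGE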

TestRunningBinaryUpgrade (100.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4040183382 start -p running-upgrade-371066 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4040183382 start -p running-upgrade-371066 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.103112164s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-371066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-371066 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.87534107s)
helpers_test.go:175: Cleaning up "running-upgrade-371066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-371066
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-371066: (3.052703378s)
--- PASS: TestRunningBinaryUpgrade (100.70s)
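Note: the running-upgrade pattern is two starts against the same profile: first with an old release binary, then with the binary under test, which adopts the running cluster in place. A sketch; the old-binary path is hypothetical (the log uses a temp copy of v1.26.0, which still takes the legacy --vm-driver flag):

    /tmp/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    minikube start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd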

TestKubernetesUpgrade (107.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.148184407s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-414017
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-414017: (1.214798921s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-414017 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-414017 status --format={{.Host}}: exit status 7 (72.655239ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.151959943s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-414017 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (220.386058ms)

-- stdout --
	* [kubernetes-upgrade-414017] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-414017
	    minikube start -p kubernetes-upgrade-414017 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4140172 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-414017 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-414017 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.615045679s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-414017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-414017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-414017: (2.641253092s)
--- PASS: TestKubernetesUpgrade (107.18s)
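Note: the version rules exercised here: stop, then start with a higher --kubernetes-version upgrades the cluster in place, while a lower version is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106) plus the recreate suggestions shown in the stderr block. A sketch with an illustrative profile name:

    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p k8s-demo
    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd   # upgrade: ok
    minikube start -p k8s-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # downgrade: exit 106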

TestMissingContainerUpgrade (177.33s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3951830252 start -p missing-upgrade-741333 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3951830252 start -p missing-upgrade-741333 --memory=2200 --driver=docker  --container-runtime=containerd: (1m24.851703637s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-741333
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-741333: (10.306451607s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-741333
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-741333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-741333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m18.113820642s)
helpers_test.go:175: Cleaning up "missing-upgrade-741333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-741333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-741333: (2.992870626s)
--- PASS: TestMissingContainerUpgrade (177.33s)
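Note: this scenario deletes the node container out from under an existing profile (as the commands above show, the container shares the profile's name) and verifies that a newer binary can recreate it. A sketch, assuming a profile missing-demo created by an older release:

    docker stop missing-demo && docker rm missing-demo
    minikube start -p missing-demo --memory=2200 --driver=docker --container-runtime=containerd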

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (77.175936ms)

-- stdout --
	* [NoKubernetes-967263] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
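
Note: exit status 14 is the expected outcome here; minikube classes the conflicting flags as a usage error (MK_USAGE) and prints the remedy shown above (minikube config unset kubernetes-version). A minimal Go sketch of the same assertion (binary path and profile name copied from this run; an illustration, not the test's actual source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --no-kubernetes and --kubernetes-version are mutually exclusive.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "NoKubernetes-967263",
		"--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	// Exit code 14 is the usage-error class seen in the log above.
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got expected MK_USAGE exit code 14")
	} else {
		fmt.Println("unexpected result:", err)
	}
}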

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (42.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-967263 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-967263 --driver=docker  --container-runtime=containerd: (41.178732339s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-967263 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --driver=docker  --container-runtime=containerd
E0819 12:10:07.265086  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.403892455s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-967263 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-967263 status -o json: exit status 2 (298.649249ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-967263","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-967263
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-967263: (2.147402804s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-967263 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.482614594s)
--- PASS: TestNoKubernetes/serial/Start (8.48s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-967263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-967263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.605174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
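
Note: the non-zero exit is the assertion. systemctl is-active --quiet exits 0 only for an active unit, and systemd conventionally exits 3 for an inactive one, so the test passes precisely because kubelet is not running. A short Go sketch of the check (binary path and profile name copied from this run; an illustration, not the test's actual source):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// "is-active --quiet" exits 0 if the unit is active; an inactive
	// unit yields a non-zero status (3), which is what we require here.
	cmd := exec.Command("out/minikube-linux-arm64", "ssh",
		"-p", "NoKubernetes-967263",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		log.Fatal("kubelet is active, but the profile was started with --no-kubernetes")
	}
	log.Println("kubelet inactive, as expected")
}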

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-967263
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-967263: (1.268710506s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.70s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-967263 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-967263 --driver=docker  --container-runtime=containerd: (7.704667086s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-967263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-967263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.173185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (121.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4126713844 start -p stopped-upgrade-299479 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4126713844 start -p stopped-upgrade-299479 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.154487607s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4126713844 -p stopped-upgrade-299479 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4126713844 -p stopped-upgrade-299479 stop: (20.208864613s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-299479 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-299479 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.294040855s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.66s)

                                                
                                    
x
+
TestPause/serial/Start (61.40s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-847048 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0819 12:14:15.856229  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-847048 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m1.400194435s)
--- PASS: TestPause/serial/Start (61.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-299479
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-845840 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-845840 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (177.829821ms)

                                                
                                                
-- stdout --
	* [false-845840] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19476
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 12:15:06.698391  482641 out.go:345] Setting OutFile to fd 1 ...
	I0819 12:15:06.698794  482641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:15:06.698803  482641 out.go:358] Setting ErrFile to fd 2...
	I0819 12:15:06.698809  482641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 12:15:06.699086  482641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19476-293809/.minikube/bin
	I0819 12:15:06.699509  482641 out.go:352] Setting JSON to false
	I0819 12:15:06.700477  482641 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7054,"bootTime":1724062653,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0819 12:15:06.700537  482641 start.go:139] virtualization:  
	I0819 12:15:06.704202  482641 out.go:177] * [false-845840] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 12:15:06.707752  482641 out.go:177]   - MINIKUBE_LOCATION=19476
	I0819 12:15:06.707847  482641 notify.go:220] Checking for updates...
	I0819 12:15:06.713282  482641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 12:15:06.716240  482641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19476-293809/kubeconfig
	I0819 12:15:06.718767  482641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19476-293809/.minikube
	I0819 12:15:06.721179  482641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 12:15:06.723889  482641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 12:15:06.727143  482641 config.go:182] Loaded profile config "pause-847048": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 12:15:06.727238  482641 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 12:15:06.752436  482641 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 12:15:06.752555  482641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 12:15:06.811406  482641 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-19 12:15:06.801695145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 12:15:06.811531  482641 docker.go:307] overlay module found
	I0819 12:15:06.816061  482641 out.go:177] * Using the docker driver based on user configuration
	I0819 12:15:06.818746  482641 start.go:297] selected driver: docker
	I0819 12:15:06.818772  482641 start.go:901] validating driver "docker" against <nil>
	I0819 12:15:06.818787  482641 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 12:15:06.822214  482641 out.go:201] 
	W0819 12:15:06.824905  482641 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0819 12:15:06.827370  482641 out.go:201] 

                                                
                                                
** /stderr **
E0819 12:15:07.265190  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:88: 
----------------------- debugLogs start: false-845840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-845840

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-845840

>>> host: /etc/nsswitch.conf:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/hosts:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/resolv.conf:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-845840

>>> host: crictl pods:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: crictl containers:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> k8s: describe netcat deployment:
error: context "false-845840" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-845840" does not exist

>>> k8s: netcat logs:
error: context "false-845840" does not exist

>>> k8s: describe coredns deployment:
error: context "false-845840" does not exist

>>> k8s: describe coredns pods:
error: context "false-845840" does not exist

>>> k8s: coredns logs:
error: context "false-845840" does not exist

>>> k8s: describe api server pod(s):
error: context "false-845840" does not exist

>>> k8s: api server logs:
error: context "false-845840" does not exist

>>> host: /etc/cni:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: ip a s:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: ip r s:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: iptables-save:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: iptables table nat:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> k8s: describe kube-proxy daemon set:
error: context "false-845840" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-845840" does not exist

>>> k8s: kube-proxy logs:
error: context "false-845840" does not exist

>>> host: kubelet daemon status:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: kubelet daemon config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> k8s: kubelet logs:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:14:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-847048
contexts:
- context:
    cluster: pause-847048
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:14:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-847048
  name: pause-847048
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-847048
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.crt
    client-key: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-845840

>>> host: docker daemon status:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: docker daemon config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/docker/daemon.json:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: docker system info:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: cri-docker daemon status:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: cri-docker daemon config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: cri-dockerd version:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: containerd daemon status:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: containerd daemon config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/containerd/config.toml:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: containerd config dump:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: crio daemon status:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: crio daemon config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: /etc/crio:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

>>> host: crio config:
* Profile "false-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-845840"

----------------------- debugLogs end: false-845840 [took: 3.852747405s] --------------------------------
helpers_test.go:175: Cleaning up "false-845840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-845840
--- PASS: TestNetworkPlugins/group/false (4.25s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.95s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-847048 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-847048 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.930760487s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.95s)

                                                
                                    
x
+
TestPause/serial/Pause (1.09s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-847048 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-847048 --alsologtostderr -v=5: (1.08363674s)
--- PASS: TestPause/serial/Pause (1.09s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-847048 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-847048 --output=json --layout=cluster: exit status 2 (407.885333ms)

                                                
                                                
-- stdout --
	{"Name":"pause-847048","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-847048","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
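
Note: the --layout=cluster JSON encodes component state with HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), and the command's exit status 2 mirrors the paused cluster. A hedged Go sketch that decodes the fields visible above (the struct is inferred from this output alone, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models only the fields visible in the output above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	data := []byte(`{"Name":"pause-847048","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-847048","StatusName":"OK"}]}`)
	var st clusterStatus
	if err := json.Unmarshal(data, &st); err != nil {
		panic(err)
	}
	// 418 marks the paused state, matching the exit status 2 above.
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
}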

                                                
                                    
x
+
TestPause/serial/Unpause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-847048 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.20s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-847048 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-847048 --alsologtostderr -v=5: (1.201884953s)
--- PASS: TestPause/serial/PauseAgain (1.20s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.50s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-847048 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-847048 --alsologtostderr -v=5: (3.499913104s)
--- PASS: TestPause/serial/DeletePaused (3.50s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-847048
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-847048: exit status 1 (17.12163ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-847048: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)
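
Note: cleanup is verified negatively; once the profile is deleted, docker volume inspect exits 1 and prints an empty JSON array, as captured above. A small Go sketch of that check (volume name copied from this run; an illustration, not the test's actual source):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "volume", "inspect", "pause-847048")
	var out bytes.Buffer
	cmd.Stdout = &out
	err := cmd.Run()
	// For a deleted volume, docker exits non-zero and stdout is "[]".
	if err != nil && bytes.Equal(bytes.TrimSpace(out.Bytes()), []byte("[]")) {
		fmt.Println("volume removed, as expected")
		return
	}
	fmt.Println("volume still present or unexpected output:", out.String())
}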

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (146.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-091610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 12:18:10.333078  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-091610 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m26.209678182s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-091610 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87557ac3-1af6-4d24-a32d-f4ec75d9e782] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [87557ac3-1af6-4d24-a32d-f4ec75d9e782] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.073389889s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-091610 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.72s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-091610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-091610 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.789867548s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-091610 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-091610 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-091610 --alsologtostderr -v=3: (12.64654891s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (95.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-069465 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-069465 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m35.662882557s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.66s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-091610 -n old-k8s-version-091610
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-091610 -n old-k8s-version-091610: exit status 7 (185.585914ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-091610 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-069465 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [91808f6e-9e71-4f3d-a693-f6fed499278d] Pending
helpers_test.go:344: "busybox" [91808f6e-9e71-4f3d-a693-f6fed499278d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [91808f6e-9e71-4f3d-a693-f6fed499278d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.044497538s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-069465 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-069465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-069465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115157555s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-069465 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-069465 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-069465 --alsologtostderr -v=3: (12.10466757s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-069465 -n no-preload-069465
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-069465 -n no-preload-069465: exit status 7 (70.664628ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-069465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (266.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-069465 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 12:22:18.926767  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:24:15.856401  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:25:07.265201  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-069465 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.355685952s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-069465 -n no-preload-069465
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vl2zf" [0e461cab-8684-486e-8a95-a0259cb0525a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004467515s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zxwv7" [dd87ccdf-e21e-4c86-9da0-eb0d00f03b3a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004807789s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vl2zf" [0e461cab-8684-486e-8a95-a0259cb0525a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0038648s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-091610 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zxwv7" [dd87ccdf-e21e-4c86-9da0-eb0d00f03b3a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004076025s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-069465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-091610 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.37s)
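
VerifyKubernetesImages lists the profile's cached images as JSON and reports anything outside the stock Kubernetes image set, as seen above. A sketch of eyeballing the same data by hand; the jq expression and the repoTags field name are assumptions about the JSON shape, not something this run asserts:

    out/minikube-linux-arm64 -p old-k8s-version-091610 image list --format=json \
      | jq -r '.[].repoTags[]?'   # field name assumed; dump the raw JSON if it differs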

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-069465 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/old-k8s-version/serial/Pause (3.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-091610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-091610 --alsologtostderr -v=1: (1.09142234s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610 -n old-k8s-version-091610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610 -n old-k8s-version-091610: exit status 2 (399.96264ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-091610 -n old-k8s-version-091610
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-091610 -n old-k8s-version-091610: exit status 2 (434.365435ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-091610 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610 -n old-k8s-version-091610
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-091610 -n old-k8s-version-091610
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.99s)
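
Pause drives a pause/status/unpause/status cycle: while paused, status reports APIServer=Paused and Kubelet=Stopped and exits 2, which the test explicitly tolerates ("may be ok"). A condensed replay of the sequence logged above (the post-unpause values are inferred from the zero exit codes, not printed in this log):

    out/minikube-linux-arm64 pause -p old-k8s-version-091610
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610   # Paused, exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-091610     # Stopped, exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-091610
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-091610   # Running, exit 0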

TestStartStop/group/no-preload/serial/Pause (4.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-069465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-069465 --alsologtostderr -v=1: (1.110576173s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-069465 -n no-preload-069465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-069465 -n no-preload-069465: exit status 2 (420.480004ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-069465 -n no-preload-069465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-069465 -n no-preload-069465: exit status 2 (434.634304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-069465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-069465 --alsologtostderr -v=1: (1.059505022s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-069465 -n no-preload-069465
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-069465 -n no-preload-069465
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.37s)

TestStartStop/group/embed-certs/serial/FirstStart (64.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-599583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-599583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m4.378256521s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (64.38s)
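
--embed-certs inlines the client certificate and key into kubeconfig as base64 *-data fields instead of referencing files under .minikube/profiles/. One way to spot-check this after the start; the jsonpath filter assumes the generated user entry is named after the profile:

    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-599583")].user.client-certificate-data}' \
      | head -c 32   # non-empty base64 here means the cert is embedded, not a file path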

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689675 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689675 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m2.256142455s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.26s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-689675 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [51545c47-c1f7-4eee-bcf6-9fd17b172cd4] Pending
helpers_test.go:344: "busybox" [51545c47-c1f7-4eee-bcf6-9fd17b172cd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [51545c47-c1f7-4eee-bcf6-9fd17b172cd4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004262236s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-689675 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)
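
DeployApp applies the repo's testdata/busybox.yaml, waits for the integration-test=busybox pod to run, then reads the container's open-file limit. The manifest below is a minimal stand-in for that testdata file (contents assumed; only the label and the busybox image are taken from this log):

    # busybox.yaml (stand-in, not the repo's actual testdata)
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]

    kubectl --context default-k8s-diff-port-689675 create -f busybox.yaml
    kubectl --context default-k8s-diff-port-689675 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context default-k8s-diff-port-689675 exec busybox -- /bin/sh -c "ulimit -n"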

TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-599583 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1b18aaf5-127f-4300-a081-035e14797742] Pending
helpers_test.go:344: "busybox" [1b18aaf5-127f-4300-a081-035e14797742] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1b18aaf5-127f-4300-a081-035e14797742] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.014477362s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-599583 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689675 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.017683245s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-689675 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)
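
The --images/--registries pair rewrites the addon's image references so the follow-up describe can confirm the override landed. Replayed from the log, with a grep added for the check (the exact rewritten reference, registry prefix plus image path, is an expectation rather than something quoted from this run):

    out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689675 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-689675 -n kube-system \
      describe deploy/metrics-server | grep -i image   # expect something like fake.domain/registry.k8s.io/echoserver:1.4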

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-689675 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-689675 --alsologtostderr -v=3: (12.260113071s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-599583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-599583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052523438s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-599583 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-599583 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-599583 --alsologtostderr -v=3: (12.029863534s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675: exit status 7 (63.593065ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-689675 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)
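
With the profile stopped, status exits 7 for the Stopped host (again tolerated as "may be ok"), and addons can still be enabled offline; they take effect on the next start. Condensed from the commands above:

    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689675   # Stopped, exit 7
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-689675 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4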

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689675 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689675 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m52.286259175s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (292.79s)
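
SecondStart reruns start with flags identical to FirstStart against the stopped profile; minikube reuses the existing node container and cluster state, which is why the busybox pod and the dashboard addon are still present for the *ExistsAfterStop checks below. The bare restart, minus the harness's --alsologtostderr:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-689675 --memory=2200 --wait=true \
      --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.0
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689675   # Running, exit 0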

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-599583 -n embed-certs-599583
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-599583 -n embed-certs-599583: exit status 7 (70.592844ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-599583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (272.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-599583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 12:29:01.457950  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.464711  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.476191  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.497649  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.539061  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.620586  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:01.782084  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:02.103730  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:02.745990  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:04.027904  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:06.589869  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:11.712094  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:15.856140  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/functional-970286/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:21.953842  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:29:42.435732  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:07.265989  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:23.397142  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.267169  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.273965  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.285842  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.307267  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.348660  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.430123  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.591891  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:52.913653  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:53.555492  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:54.836886  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:30:57.398804  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:31:02.520188  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:31:12.762485  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:31:33.244332  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:31:45.319772  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-599583 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m31.544285716s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-599583 -n embed-certs-599583
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mh4bj" [6b48cf16-85b1-4b62-b9fe-7aec77b269d4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004712494s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mh4bj" [6b48cf16-85b1-4b62-b9fe-7aec77b269d4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004635671s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-599583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-599583 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-599583 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-599583 -n embed-certs-599583
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-599583 -n embed-certs-599583: exit status 2 (320.1036ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-599583 -n embed-certs-599583
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-599583 -n embed-certs-599583: exit status 2 (347.255036ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-599583 --alsologtostderr -v=1
E0819 12:32:14.206390  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-599583 -n embed-certs-599583
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-599583 -n embed-certs-599583
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)

TestStartStop/group/newest-cni/serial/FirstStart (39.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-880537 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-880537 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (39.037695398s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.04s)
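
The newest-cni profile is mostly flag plumbing: a CNI network plugin with no CNI actually installed, a kubeadm override via --extra-config, a feature gate, and a trimmed --wait set so the start does not block on pods that would need working pod networking. The same invocation, annotated:

    # --wait=apiserver,system_pods,default_sa        skip readiness waits that need a CNI
    # --extra-config=kubeadm.pod-network-cidr=...    forwarded to kubeadm init
    out/minikube-linux-arm64 start -p newest-cni-880537 --memory=2200 \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.0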

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9g6zq" [019971a8-ba15-43b2-9884-676803d126fe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004820277s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9g6zq" [019971a8-ba15-43b2-9884-676803d126fe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004256789s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-689675 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689675 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-689675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-689675 --alsologtostderr -v=1: (1.087253531s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675: exit status 2 (421.788674ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675: exit status 2 (460.283476ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-689675 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689675 -n default-k8s-diff-port-689675
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.06s)

TestNetworkPlugins/group/auto/Start (66.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m6.358375358s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.36s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-880537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-880537 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.816882255s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.82s)

TestStartStop/group/newest-cni/serial/Stop (1.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-880537 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-880537 --alsologtostderr -v=3: (1.38588867s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-880537 -n newest-cni-880537
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-880537 -n newest-cni-880537: exit status 7 (90.392229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-880537 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (21.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-880537 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-880537 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (21.431176058s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-880537 -n newest-cni-880537
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.85s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-880537 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (4.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-880537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-880537 -n newest-cni-880537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-880537 -n newest-cni-880537: exit status 2 (413.062596ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-880537 -n newest-cni-880537
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-880537 -n newest-cni-880537: exit status 2 (524.569901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-880537 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-880537 --alsologtostderr -v=1: (1.145408437s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-880537 -n newest-cni-880537
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-880537 -n newest-cni-880537
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.12s)
E0819 12:38:26.055232  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (52.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0819 12:33:36.128502  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/no-preload-069465/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.550181464s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.55s)

TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)
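
KubeletFlags just greps the kubelet command line over ssh. For a containerd profile the interesting bit is the runtime socket; the flag name and expected value below come from upstream kubelet conventions for containerd, not from this log:

    out/minikube-linux-arm64 ssh -p auto-845840 "pgrep -a kubelet" \
      | tr ' ' '\n' | grep -- '--container-runtime-endpoint'   # expect unix:///run/containerd/containerd.sock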

TestNetworkPlugins/group/auto/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sft26" [d2ea3fff-06f9-417a-b9fa-9cbec797f08b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sft26" [d2ea3fff-06f9-417a-b9fa-9cbec797f08b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004046867s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

TestNetworkPlugins/group/auto/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
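
DNS, Localhost and HairPin form a trio against the same netcat deployment: name resolution through cluster DNS, a plain loopback connect, and the hairpin case, where the pod must reach itself back through its own service. The three probes as logged, side by side:

    kubectl --context auto-845840 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"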

TestNetworkPlugins/group/calico/Start (70.68s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.675883959s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.68s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-7tm9v" [60900c7c-6695-4970-aafb-2188e2d4862a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004386373s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vcsfl" [d518e132-0019-4888-9ee9-a948f7ea64a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 12:34:29.162021  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/old-k8s-version-091610/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vcsfl" [d518e132-0019-4888-9ee9-a948f7ea64a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005228667s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

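The DNS, Localhost, and HairPin steps all exec into the netcat deployment's pod: nslookup resolves the in-cluster kubernetes.default name, while nc -z (connect and close without sending data, -w 5 giving a 5-second timeout) dials first localhost:8080 and then the pod's own Service name, the hairpin case. A combined Go sketch of the three probes, assuming the kindnet-845840 context from this run and kubectl on PATH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubectlExec runs a shell command inside the netcat deployment's pod.
    func kubectlExec(kubeContext, shellCmd string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        kubeContext := "kindnet-845840"
        checks := []string{
            "nslookup kubernetes.default",    // in-cluster DNS resolution
            "nc -w 5 -i 5 -z localhost 8080", // localhost reachability
            "nc -w 5 -i 5 -z netcat 8080",    // hairpin: pod dials its own Service
        }
        for _, c := range checks {
            if err := kubectlExec(kubeContext, c); err != nil {
                fmt.Println("check failed:", err)
            }
        }
    }
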
TestNetworkPlugins/group/custom-flannel/Start (58.62s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0819 12:35:07.265625  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/addons-288312/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.624080705s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.62s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jm4s6" [dc9384e9-f696-4a37-90ba-d49cdcb9c7be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005332634s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4k2bp" [d61f9c54-5dfc-459f-bc96-27f30a60fc37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4k2bp" [d61f9c54-5dfc-459f-bc96-27f30a60fc37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004458048s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8zvgn" [b3239124-128d-4573-b57a-89d2ab422122] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8zvgn" [b3239124-128d-4573-b57a-89d2ab422122] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004198948s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

TestNetworkPlugins/group/enable-default-cni/Start (50.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.822935347s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.82s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (54.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.671373669s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-52s5z" [0a848359-0d48-4e31-b369-ad44a5608901] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 12:37:04.092616  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.098963  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.110366  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.131712  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.173115  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.254610  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.416295  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:04.738273  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:05.380127  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:06.661721  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-52s5z" [0a848359-0d48-4e31-b369-ad44a5608901] Running
E0819 12:37:09.223675  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
E0819 12:37:14.345571  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004825553s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

Note: the cert_rotation errors interleaved above come from a background client-go certificate reloader in the test process that still watches the deleted default-k8s-diff-port-689675 profile's client.crt; they are leftover noise from profile cleanup, not failures in this test.

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hkdkw" [0cdc5b40-b9b6-4310-ace4-17f47d9861dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005084912s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (50.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-845840 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (50.63281289s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.63s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (11.57s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pqvzt" [e314a5b6-1715-45ba-b329-11514509ff2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 12:37:45.084814  299191 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/default-k8s-diff-port-689675/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pqvzt" [e314a5b6-1715-45ba-b329-11514509ff2a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003694367s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.57s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-845840 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-845840 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kgh76" [2414e0cb-8bc1-4aa9-aaf5-e9952a352467] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kgh76" [2414e0cb-8bc1-4aa9-aaf5-e9952a352467] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004655538s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-845840 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-845840 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-406447 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-406447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-406447
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-395650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-395650
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (3.59s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-845840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-845840" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 19 Aug 2024 12:14:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: cluster_info
server: https://192.168.76.2:8443
name: pause-847048
contexts:
- context:
cluster: pause-847048
extensions:
- extension:
last-update: Mon, 19 Aug 2024 12:14:53 UTC
provider: minikube.sigs.k8s.io
version: v1.33.1
name: context_info
namespace: default
user: pause-847048
name: pause-847048
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-847048
user:
client-certificate: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.crt
client-key: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-845840

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

>>> host: /etc/crio:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

>>> host: crio config:
* Profile "kubenet-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-845840"

----------------------- debugLogs end: kubenet-845840 [took: 3.433628419s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-845840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-845840
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)

TestNetworkPlugins/group/cilium (4.97s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-845840 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-845840

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-845840

>>> host: /etc/nsswitch.conf:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/hosts:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/resolv.conf:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-845840

>>> host: crictl pods:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: crictl containers:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> k8s: describe netcat deployment:
error: context "cilium-845840" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-845840" does not exist

>>> k8s: netcat logs:
error: context "cilium-845840" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-845840" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-845840" does not exist

>>> k8s: coredns logs:
error: context "cilium-845840" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-845840" does not exist

>>> k8s: api server logs:
error: context "cilium-845840" does not exist

>>> host: /etc/cni:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: ip a s:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: ip r s:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: iptables-save:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: iptables table nat:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-845840

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-845840

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-845840" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-845840" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-845840

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-845840

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-845840" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-845840" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-845840" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-845840" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-845840" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: kubelet daemon config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> k8s: kubelet logs:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19476-293809/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:14:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-847048
contexts:
- context:
    cluster: pause-847048
    extensions:
    - extension:
        last-update: Mon, 19 Aug 2024 12:14:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-847048
  name: pause-847048
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-847048
  user:
    client-certificate: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.crt
    client-key: /home/jenkins/minikube-integration/19476-293809/.minikube/profiles/pause-847048/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-845840

>>> host: docker daemon status:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: docker daemon config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: docker system info:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: cri-docker daemon status:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: cri-docker daemon config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: cri-dockerd version:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: containerd daemon status:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: containerd daemon config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: containerd config dump:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: crio daemon status:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: crio daemon config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: /etc/crio:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

>>> host: crio config:
* Profile "cilium-845840" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-845840"

----------------------- debugLogs end: cilium-845840 [took: 4.809731617s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-845840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-845840
--- SKIP: TestNetworkPlugins/group/cilium (4.97s)
