Test Report: Docker_Linux_containerd_arm64 19461

ee4f5fb2e73abafca70b3598ab7977372efc25a8:2024-08-16:35814

Failed tests (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                199.84
302    TestStartStop/group/old-k8s-version/serial/SecondStart   375.02
TestAddons/serial/Volcano (199.84s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 51.384472ms
addons_test.go:913: volcano-controller stabilized in 51.448021ms
addons_test.go:905: volcano-admission stabilized in 51.488431ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-jz6d7" [ea029345-e710-4825-8a86-b6c0f9cb46e0] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003921634s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-2f46h" [46c1ba7a-40de-43a6-ab4d-8c4404c2d657] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003448601s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-kdtv4" [ed91c8d5-bb6a-41df-9bdc-9fb0822860d5] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003388105s
addons_test.go:932: (dbg) Run:  kubectl --context addons-864899 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-864899 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-864899 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c5d24db8-3cb9-478b-820b-39f6f4f0001f] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-864899 -n addons-864899
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-16 17:49:30.838252215 +0000 UTC m=+435.259701610
addons_test.go:964: (dbg) Run:  kubectl --context addons-864899 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-864899 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-b7d0f83f-e438-4786-b6cc-3b628c489a28
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jwbs2 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-jwbs2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-864899 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-864899 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
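The FailedScheduling event above shows the single minikube node cannot fit the job's 1-CPU request: the node container was created with --cpus=2 (NanoCpus 2000000000 in the docker inspect below), and the bundled addon pods already hold most of that. A minimal diagnostic sketch for confirming the gap, assuming access to this run's addons-864899 context (not part of the test itself):

    # Compare allocatable CPU on the node against CPU already requested by
    # running pods; if existing requests plus the job's 1 CPU exceed the
    # ~2 CPUs allocatable, volcano reports
    # "0/1 nodes are unavailable: 1 Insufficient cpu."
    kubectl --context addons-864899 describe node addons-864899 \
      | grep -A 7 -e 'Allocatable' -e 'Allocated resources'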
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-864899
helpers_test.go:235: (dbg) docker inspect addons-864899:

-- stdout --
	[
	    {
	        "Id": "8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1",
	        "Created": "2024-08-16T17:43:00.698888511Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294622,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T17:43:00.833425764Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1/hosts",
	        "LogPath": "/var/lib/docker/containers/8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1/8b98e662b4abf10d6c84b39c3d5cba0b2b9dfa26678fa7659c49f3a412c593d1-json.log",
	        "Name": "/addons-864899",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-864899:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-864899",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e575b9a0f0ae530ccd3d10aa660e19acdcf6cf5a099a5c652b607ad15344e2-init/diff:/var/lib/docker/overlay2/6d9ca87c64683da0141fe1f37bb6088cb89212b329dea26763f56ee455e7f801/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e575b9a0f0ae530ccd3d10aa660e19acdcf6cf5a099a5c652b607ad15344e2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e575b9a0f0ae530ccd3d10aa660e19acdcf6cf5a099a5c652b607ad15344e2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e575b9a0f0ae530ccd3d10aa660e19acdcf6cf5a099a5c652b607ad15344e2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-864899",
	                "Source": "/var/lib/docker/volumes/addons-864899/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-864899",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-864899",
	                "name.minikube.sigs.k8s.io": "addons-864899",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc4246ded2294fa241800b1823023d6bf7ac166943f6cc8065d09f0a70048a89",
	            "SandboxKey": "/var/run/docker/netns/dc4246ded229",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-864899": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0c0db5072b449bb2448037002039fc2e60a9a7fba094a1ed44533b8777782133",
	                    "EndpointID": "2f42ed6a15bc15bb308aa35c0ee5f9b3350fd1549d35b6af0f8333994b369642",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-864899",
	                        "8b98e662b4ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
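A note on reading the inspect output: HostConfig.PortBindings requests ephemeral host ports (empty HostPort), while the ports Docker actually assigned appear under NetworkSettings.Ports. A sketch for resolving one mapping without parsing the full JSON (standard Docker CLI; container name taken from this run):

    # API server port 8443 resolves to the host port listed above (127.0.0.1:33143)
    docker port addons-864899 8443
    # equivalently, via the same Go template minikube itself runs for the SSH port later in this log
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-864899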
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-864899 -n addons-864899
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 logs -n 25: (1.571636041s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-778826   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | -p download-only-778826              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| delete  | -p download-only-778826              | download-only-778826   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| start   | -o=json --download-only              | download-only-528021   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | -p download-only-528021              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| delete  | -p download-only-528021              | download-only-528021   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| delete  | -p download-only-778826              | download-only-778826   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| delete  | -p download-only-528021              | download-only-528021   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| start   | --download-only -p                   | download-docker-792554 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | download-docker-792554               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-792554            | download-docker-792554 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| start   | --download-only -p                   | binary-mirror-779675   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | binary-mirror-779675                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46441               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-779675              | binary-mirror-779675   | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| addons  | disable dashboard -p                 | addons-864899          | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | addons-864899                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-864899          | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | addons-864899                        |                        |         |         |                     |                     |
	| start   | -p addons-864899 --wait=true         | addons-864899          | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:46 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
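The Audit table wraps each invocation's arguments across several rows. Flattened for readability, the start command that built the cluster under test reads (binary path as used by the dbg runs above):

    out/minikube-linux-arm64 start -p addons-864899 --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget \
      --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --driver=docker \
      --container-runtime=containerd --addons=ingress --addons=ingress-dns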
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:42:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:42:36.595958  294136 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:42:36.596114  294136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:36.596125  294136 out.go:358] Setting ErrFile to fd 2...
	I0816 17:42:36.596130  294136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:36.596381  294136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 17:42:36.596842  294136 out.go:352] Setting JSON to false
	I0816 17:42:36.597806  294136 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5086,"bootTime":1723825070,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 17:42:36.597888  294136 start.go:139] virtualization:  
	I0816 17:42:36.600407  294136 out.go:177] * [addons-864899] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:42:36.602362  294136 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:42:36.602472  294136 notify.go:220] Checking for updates...
	I0816 17:42:36.605799  294136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:42:36.607617  294136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:42:36.609691  294136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 17:42:36.611328  294136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 17:42:36.613017  294136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:42:36.615354  294136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:42:36.640221  294136 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:42:36.640334  294136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:36.696849  294136 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:42:36.687404536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:36.696958  294136 docker.go:307] overlay module found
	I0816 17:42:36.699401  294136 out.go:177] * Using the docker driver based on user configuration
	I0816 17:42:36.700982  294136 start.go:297] selected driver: docker
	I0816 17:42:36.701002  294136 start.go:901] validating driver "docker" against <nil>
	I0816 17:42:36.701026  294136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:42:36.701692  294136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:36.755152  294136 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:42:36.745637626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:36.755317  294136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:42:36.755549  294136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:42:36.757750  294136 out.go:177] * Using Docker driver with root privileges
	I0816 17:42:36.759791  294136 cni.go:84] Creating CNI manager for ""
	I0816 17:42:36.759817  294136 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 17:42:36.759840  294136 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:42:36.759933  294136 start.go:340] cluster config:
	{Name:addons-864899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-864899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:42:36.762027  294136 out.go:177] * Starting "addons-864899" primary control-plane node in "addons-864899" cluster
	I0816 17:42:36.764061  294136 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0816 17:42:36.766414  294136 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:42:36.768389  294136 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 17:42:36.768434  294136 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:42:36.768446  294136 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0816 17:42:36.768457  294136 cache.go:56] Caching tarball of preloaded images
	I0816 17:42:36.768554  294136 preload.go:172] Found /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 17:42:36.768565  294136 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0816 17:42:36.768907  294136 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/config.json ...
	I0816 17:42:36.768969  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/config.json: {Name:mk0242e859bbbe181b21be9134ca1e5ab1d8feae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:42:36.784029  294136 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:42:36.784132  294136 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:42:36.784156  294136 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 17:42:36.784163  294136 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 17:42:36.784177  294136 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:42:36.784183  294136 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0816 17:42:53.685212  294136 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0816 17:42:53.685256  294136 cache.go:194] Successfully downloaded all kic artifacts
	I0816 17:42:53.685299  294136 start.go:360] acquireMachinesLock for addons-864899: {Name:mk7f96bc04aac37db43e0cae118b2e5114ac4f15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 17:42:53.685416  294136 start.go:364] duration metric: took 93.817µs to acquireMachinesLock for "addons-864899"
	I0816 17:42:53.685447  294136 start.go:93] Provisioning new machine with config: &{Name:addons-864899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-864899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0816 17:42:53.685539  294136 start.go:125] createHost starting for "" (driver="docker")
	I0816 17:42:53.688325  294136 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0816 17:42:53.688586  294136 start.go:159] libmachine.API.Create for "addons-864899" (driver="docker")
	I0816 17:42:53.688623  294136 client.go:168] LocalClient.Create starting
	I0816 17:42:53.688749  294136 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem
	I0816 17:42:54.495578  294136 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem
	I0816 17:42:55.048432  294136 cli_runner.go:164] Run: docker network inspect addons-864899 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 17:42:55.063727  294136 cli_runner.go:211] docker network inspect addons-864899 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 17:42:55.063808  294136 network_create.go:284] running [docker network inspect addons-864899] to gather additional debugging logs...
	I0816 17:42:55.063831  294136 cli_runner.go:164] Run: docker network inspect addons-864899
	W0816 17:42:55.079400  294136 cli_runner.go:211] docker network inspect addons-864899 returned with exit code 1
	I0816 17:42:55.079431  294136 network_create.go:287] error running [docker network inspect addons-864899]: docker network inspect addons-864899: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-864899 not found
	I0816 17:42:55.079446  294136 network_create.go:289] output of [docker network inspect addons-864899]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-864899 not found
	
	** /stderr **
	I0816 17:42:55.079573  294136 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:42:55.096783  294136 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40000fe820}
	I0816 17:42:55.096831  294136 network_create.go:124] attempt to create docker network addons-864899 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0816 17:42:55.096891  294136 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-864899 addons-864899
	I0816 17:42:55.163087  294136 network_create.go:108] docker network addons-864899 192.168.49.0/24 created
	I0816 17:42:55.163128  294136 kic.go:121] calculated static IP "192.168.49.2" for the "addons-864899" container
	I0816 17:42:55.163207  294136 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 17:42:55.178502  294136 cli_runner.go:164] Run: docker volume create addons-864899 --label name.minikube.sigs.k8s.io=addons-864899 --label created_by.minikube.sigs.k8s.io=true
	I0816 17:42:55.194839  294136 oci.go:103] Successfully created a docker volume addons-864899
	I0816 17:42:55.194937  294136 cli_runner.go:164] Run: docker run --rm --name addons-864899-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864899 --entrypoint /usr/bin/test -v addons-864899:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0816 17:42:56.518109  294136 cli_runner.go:217] Completed: docker run --rm --name addons-864899-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864899 --entrypoint /usr/bin/test -v addons-864899:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.323116207s)
	I0816 17:42:56.518143  294136 oci.go:107] Successfully prepared a docker volume addons-864899
	I0816 17:42:56.518164  294136 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 17:42:56.518185  294136 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 17:42:56.518292  294136 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-864899:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0816 17:43:00.631122  294136 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-864899:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.112780233s)
	I0816 17:43:00.631156  294136 kic.go:203] duration metric: took 4.112967735s to extract preloaded images to volume ...
	W0816 17:43:00.631291  294136 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0816 17:43:00.631426  294136 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0816 17:43:00.682386  294136 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-864899 --name addons-864899 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-864899 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-864899 --network addons-864899 --ip 192.168.49.2 --volume addons-864899:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0816 17:43:00.994890  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Running}}
	I0816 17:43:01.027976  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:01.049111  294136 cli_runner.go:164] Run: docker exec addons-864899 stat /var/lib/dpkg/alternatives/iptables
	I0816 17:43:01.098661  294136 oci.go:144] the created container "addons-864899" has a running status.
	I0816 17:43:01.098692  294136 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa...
	I0816 17:43:01.481816  294136 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0816 17:43:01.510125  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:01.539094  294136 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0816 17:43:01.539119  294136 kic_runner.go:114] Args: [docker exec --privileged addons-864899 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0816 17:43:01.627172  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:01.650284  294136 machine.go:93] provisionDockerMachine start ...
	I0816 17:43:01.650379  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:01.675730  294136 main.go:141] libmachine: Using SSH client type: native
	I0816 17:43:01.676010  294136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0816 17:43:01.676028  294136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 17:43:01.676579  294136 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55136->127.0.0.1:33140: read: connection reset by peer
	I0816 17:43:04.808360  294136 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864899
	
	I0816 17:43:04.808386  294136 ubuntu.go:169] provisioning hostname "addons-864899"
	I0816 17:43:04.808453  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:04.824864  294136 main.go:141] libmachine: Using SSH client type: native
	I0816 17:43:04.825255  294136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0816 17:43:04.825276  294136 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-864899 && echo "addons-864899" | sudo tee /etc/hostname
	I0816 17:43:04.968536  294136 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-864899
	
	I0816 17:43:04.968613  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:04.985125  294136 main.go:141] libmachine: Using SSH client type: native
	I0816 17:43:04.985367  294136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0816 17:43:04.985386  294136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-864899' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-864899/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-864899' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 17:43:05.117064  294136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 17:43:05.117108  294136 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19461-287979/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-287979/.minikube}
	I0816 17:43:05.117136  294136 ubuntu.go:177] setting up certificates
	I0816 17:43:05.117145  294136 provision.go:84] configureAuth start
	I0816 17:43:05.117210  294136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864899
	I0816 17:43:05.134037  294136 provision.go:143] copyHostCerts
	I0816 17:43:05.134130  294136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/key.pem (1679 bytes)
	I0816 17:43:05.134287  294136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/ca.pem (1078 bytes)
	I0816 17:43:05.134347  294136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/cert.pem (1123 bytes)
	I0816 17:43:05.134400  294136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem org=jenkins.addons-864899 san=[127.0.0.1 192.168.49.2 addons-864899 localhost minikube]
	I0816 17:43:05.386869  294136 provision.go:177] copyRemoteCerts
	I0816 17:43:05.386963  294136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 17:43:05.387007  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:05.404835  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:05.497539  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 17:43:05.522022  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0816 17:43:05.545267  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 17:43:05.569478  294136 provision.go:87] duration metric: took 452.315515ms to configureAuth
	I0816 17:43:05.569557  294136 ubuntu.go:193] setting minikube options for container-runtime
	I0816 17:43:05.569780  294136 config.go:182] Loaded profile config "addons-864899": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:43:05.569798  294136 machine.go:96] duration metric: took 3.919493937s to provisionDockerMachine
	I0816 17:43:05.569806  294136 client.go:171] duration metric: took 11.88117312s to LocalClient.Create
	I0816 17:43:05.569828  294136 start.go:167] duration metric: took 11.881240723s to libmachine.API.Create "addons-864899"
	I0816 17:43:05.569836  294136 start.go:293] postStartSetup for "addons-864899" (driver="docker")
	I0816 17:43:05.569845  294136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 17:43:05.569900  294136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 17:43:05.569957  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:05.587494  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:05.683310  294136 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 17:43:05.686637  294136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 17:43:05.686672  294136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 17:43:05.686687  294136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 17:43:05.686699  294136 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 17:43:05.686710  294136 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-287979/.minikube/addons for local assets ...
	I0816 17:43:05.686782  294136 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-287979/.minikube/files for local assets ...
	I0816 17:43:05.686808  294136 start.go:296] duration metric: took 116.966798ms for postStartSetup
	I0816 17:43:05.687124  294136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864899
	I0816 17:43:05.704038  294136 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/config.json ...
	I0816 17:43:05.704332  294136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:43:05.704384  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:05.721433  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:05.810052  294136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 17:43:05.814344  294136 start.go:128] duration metric: took 12.128788943s to createHost
	I0816 17:43:05.814365  294136 start.go:83] releasing machines lock for "addons-864899", held for 12.128935305s
	I0816 17:43:05.814436  294136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-864899
	I0816 17:43:05.831123  294136 ssh_runner.go:195] Run: cat /version.json
	I0816 17:43:05.831176  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:05.831477  294136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 17:43:05.831541  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:05.862808  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:05.874792  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:05.952655  294136 ssh_runner.go:195] Run: systemctl --version
	I0816 17:43:06.086446  294136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 17:43:06.091157  294136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0816 17:43:06.117542  294136 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0816 17:43:06.117651  294136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 17:43:06.147281  294136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
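The find/mv pair above neutralizes competing bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A rough Go equivalent of that rename pass, with the directory and suffix taken from the log and the matching simplified to substring checks:

```go
// Sketch: disable conflicting CNI configs by renaming, mirroring the
// "find ... -exec mv {} {}.mk_disabled" step in the log above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // only plain files, skip ones already disabled
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			p := filepath.Join(dir, name)
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", p)
		}
	}
}
```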
	I0816 17:43:06.147311  294136 start.go:495] detecting cgroup driver to use...
	I0816 17:43:06.147346  294136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 17:43:06.147401  294136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 17:43:06.160181  294136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 17:43:06.171334  294136 docker.go:217] disabling cri-docker service (if available) ...
	I0816 17:43:06.171395  294136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 17:43:06.184949  294136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 17:43:06.199125  294136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 17:43:06.277778  294136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 17:43:06.372606  294136 docker.go:233] disabling docker service ...
	I0816 17:43:06.372673  294136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 17:43:06.393037  294136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 17:43:06.405133  294136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 17:43:06.494148  294136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 17:43:06.582127  294136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 17:43:06.593666  294136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 17:43:06.610164  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0816 17:43:06.619646  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 17:43:06.629259  294136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 17:43:06.629375  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 17:43:06.638897  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 17:43:06.648412  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 17:43:06.658247  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 17:43:06.667740  294136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 17:43:06.676912  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 17:43:06.687541  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0816 17:43:06.697311  294136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
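The run of sed commands above rewrites /etc/containerd/config.toml in place. A Go sketch of one of them, the SystemdCgroup toggle that pins containerd to the cgroupfs driver; the regex and file path mirror the logged command:

```go
// Sketch: in-place equivalent of
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// (?m) makes ^/$ match per line; ${1} preserves the original indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
```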
	I0816 17:43:06.707267  294136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 17:43:06.716046  294136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 17:43:06.724872  294136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:43:06.802215  294136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 17:43:06.938103  294136 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 17:43:06.938240  294136 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0816 17:43:06.941890  294136 start.go:563] Will wait 60s for crictl version
	I0816 17:43:06.941998  294136 ssh_runner.go:195] Run: which crictl
	I0816 17:43:06.945264  294136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 17:43:06.984882  294136 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0816 17:43:06.985022  294136 ssh_runner.go:195] Run: containerd --version
	I0816 17:43:07.010874  294136 ssh_runner.go:195] Run: containerd --version
	I0816 17:43:07.034865  294136 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0816 17:43:07.036945  294136 cli_runner.go:164] Run: docker network inspect addons-864899 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 17:43:07.055165  294136 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0816 17:43:07.058831  294136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
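The one-liner above makes the host.minikube.internal mapping idempotent: strip any stale entry, then append the current one. A small Go sketch of the same strategy, with the entry and file path from the log and simplified newline handling:

```go
// Sketch: idempotent /etc/hosts entry, mirroring the grep -v / echo pipeline above.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any stale host.minikube.internal line, keep everything else.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
```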
	I0816 17:43:07.069785  294136 kubeadm.go:883] updating cluster {Name:addons-864899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-864899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 17:43:07.069922  294136 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 17:43:07.069997  294136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:43:07.108615  294136 containerd.go:627] all images are preloaded for containerd runtime.
	I0816 17:43:07.108639  294136 containerd.go:534] Images already preloaded, skipping extraction
	I0816 17:43:07.108700  294136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 17:43:07.146089  294136 containerd.go:627] all images are preloaded for containerd runtime.
	I0816 17:43:07.146111  294136 cache_images.go:84] Images are preloaded, skipping loading
	I0816 17:43:07.146120  294136 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0816 17:43:07.146228  294136 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-864899 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-864899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 17:43:07.146299  294136 ssh_runner.go:195] Run: sudo crictl info
	I0816 17:43:07.183127  294136 cni.go:84] Creating CNI manager for ""
	I0816 17:43:07.183156  294136 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 17:43:07.183168  294136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 17:43:07.183190  294136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-864899 NodeName:addons-864899 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0816 17:43:07.183320  294136 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-864899"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
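minikube emits the kubeadm config above as a rendered document set; a plausible way to produce such YAML is a text template filled from the cluster config, sketched below. The template fields and data map here are illustrative, not minikube's actual internals; only the values come from the config above.

```go
// Sketch: render a fragment of a kubeadm config like the one above from a
// Go template. Field names are hypothetical.
package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	// Values taken from the rendered config in the log above.
	err := t.Execute(os.Stdout, map[string]string{
		"Endpoint":      "control-plane.minikube.internal:8443",
		"Version":       "v1.31.0",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
```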
	I0816 17:43:07.183390  294136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0816 17:43:07.192080  294136 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 17:43:07.192149  294136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 17:43:07.200837  294136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0816 17:43:07.218669  294136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 17:43:07.236767  294136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0816 17:43:07.254594  294136 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0816 17:43:07.258044  294136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 17:43:07.268825  294136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:43:07.348624  294136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:43:07.362920  294136 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899 for IP: 192.168.49.2
	I0816 17:43:07.362946  294136 certs.go:194] generating shared ca certs ...
	I0816 17:43:07.362963  294136 certs.go:226] acquiring lock for ca certs: {Name:mkc2317239a75a145c30b6075675eef6239ccdc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:07.363648  294136 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key
	I0816 17:43:08.464424  294136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt ...
	I0816 17:43:08.464472  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt: {Name:mk537dc62942709e2029b74f613a82975406f674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:08.465399  294136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key ...
	I0816 17:43:08.465433  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key: {Name:mk3b384f9849db1d7bb6be0e305b1034636ab660 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:08.465619  294136 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key
	I0816 17:43:08.996463  294136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.crt ...
	I0816 17:43:08.996499  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.crt: {Name:mk34e488d0c29c8baa7ba5595bc2af030e5331ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:08.996689  294136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key ...
	I0816 17:43:08.996704  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key: {Name:mk46d42a41602bac1e156fc92c642b11d98ecb2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:08.996790  294136 certs.go:256] generating profile certs ...
	I0816 17:43:08.996857  294136 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.key
	I0816 17:43:08.996878  294136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt with IP's: []
	I0816 17:43:09.350123  294136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt ...
	I0816 17:43:09.350157  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: {Name:mk4a7b9e9e1e36e9f37d968a6780d05b80656677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:09.350348  294136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.key ...
	I0816 17:43:09.350361  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.key: {Name:mk64bec3fe5b7ab5767edbf4755815fb8beeb424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:09.351059  294136 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key.0339c74d
	I0816 17:43:09.351085  294136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt.0339c74d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0816 17:43:09.708651  294136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt.0339c74d ...
	I0816 17:43:09.708685  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt.0339c74d: {Name:mk97c4a9dccda18eeb751d32e8af120b2867143b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:09.708894  294136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key.0339c74d ...
	I0816 17:43:09.708912  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key.0339c74d: {Name:mk20fe521e0a983f42638e04d6364a13c63dbb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:09.709019  294136 certs.go:381] copying /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt.0339c74d -> /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt
	I0816 17:43:09.709113  294136 certs.go:385] copying /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key.0339c74d -> /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key
	I0816 17:43:09.709177  294136 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.key
	I0816 17:43:09.709198  294136 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.crt with IP's: []
	I0816 17:43:11.428494  294136 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.crt ...
	I0816 17:43:11.428535  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.crt: {Name:mk6f0edee45cc01f1ec1d626bbe68f00eac3f078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:11.428733  294136 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.key ...
	I0816 17:43:11.428749  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.key: {Name:mk8fa2114be03f08c832d5c0480ec7c384df9f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:11.429527  294136 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 17:43:11.429577  294136 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem (1078 bytes)
	I0816 17:43:11.429603  294136 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem (1123 bytes)
	I0816 17:43:11.429637  294136 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem (1679 bytes)
	I0816 17:43:11.430236  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 17:43:11.456716  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 17:43:11.482160  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 17:43:11.506427  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 17:43:11.530426  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0816 17:43:11.554299  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 17:43:11.578433  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 17:43:11.602661  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 17:43:11.626976  294136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 17:43:11.650911  294136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 17:43:11.668401  294136 ssh_runner.go:195] Run: openssl version
	I0816 17:43:11.673779  294136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 17:43:11.683212  294136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:43:11.686751  294136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:43:11.686815  294136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 17:43:11.693691  294136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
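The two commands above compute the CA's OpenSSL subject hash and link <hash>.0 into /etc/ssl/certs, which is how standard TLS stacks are made to trust minikubeCA. A Go sketch of the same pair of steps; paths mirror the log, and shelling out to openssl mirrors the logged invocation:

```go
// Sketch: subject-hash symlink for a CA cert, as in the openssl/ln steps above.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // best-effort removal to emulate ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}
```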
	I0816 17:43:11.703143  294136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 17:43:11.706669  294136 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0816 17:43:11.706717  294136 kubeadm.go:392] StartCluster: {Name:addons-864899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-864899 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:43:11.706803  294136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 17:43:11.706860  294136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 17:43:11.743100  294136 cri.go:89] found id: ""
	I0816 17:43:11.743209  294136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 17:43:11.751900  294136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 17:43:11.761407  294136 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0816 17:43:11.761477  294136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 17:43:11.770868  294136 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 17:43:11.770890  294136 kubeadm.go:157] found existing configuration files:
	
	I0816 17:43:11.770957  294136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 17:43:11.780894  294136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0816 17:43:11.780966  294136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0816 17:43:11.789494  294136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 17:43:11.798436  294136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0816 17:43:11.798500  294136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0816 17:43:11.806897  294136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 17:43:11.815503  294136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0816 17:43:11.815592  294136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 17:43:11.824008  294136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 17:43:11.832583  294136 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0816 17:43:11.832648  294136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0816 17:43:11.841123  294136 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0816 17:43:11.883802  294136 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0816 17:43:11.884016  294136 kubeadm.go:310] [preflight] Running pre-flight checks
	I0816 17:43:11.911784  294136 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0816 17:43:11.911898  294136 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0816 17:43:11.911957  294136 kubeadm.go:310] OS: Linux
	I0816 17:43:11.912020  294136 kubeadm.go:310] CGROUPS_CPU: enabled
	I0816 17:43:11.912092  294136 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0816 17:43:11.912195  294136 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0816 17:43:11.912279  294136 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0816 17:43:11.912357  294136 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0816 17:43:11.912436  294136 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0816 17:43:11.912514  294136 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0816 17:43:11.912593  294136 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0816 17:43:11.912671  294136 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0816 17:43:11.989706  294136 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0816 17:43:11.989876  294136 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0816 17:43:11.990009  294136 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0816 17:43:11.995029  294136 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0816 17:43:11.997602  294136 out.go:235]   - Generating certificates and keys ...
	I0816 17:43:11.997733  294136 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0816 17:43:11.997810  294136 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0816 17:43:12.337805  294136 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0816 17:43:12.819617  294136 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0816 17:43:12.977397  294136 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0816 17:43:13.331177  294136 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0816 17:43:13.595515  294136 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0816 17:43:13.595692  294136 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-864899 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:43:14.148832  294136 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0816 17:43:14.149173  294136 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-864899 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0816 17:43:14.577318  294136 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0816 17:43:15.591455  294136 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0816 17:43:16.102932  294136 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0816 17:43:16.103302  294136 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0816 17:43:16.585985  294136 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0816 17:43:16.827199  294136 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0816 17:43:17.106656  294136 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0816 17:43:17.753847  294136 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0816 17:43:18.455712  294136 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0816 17:43:18.456402  294136 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0816 17:43:18.459480  294136 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0816 17:43:18.461874  294136 out.go:235]   - Booting up control plane ...
	I0816 17:43:18.461970  294136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0816 17:43:18.462051  294136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0816 17:43:18.462776  294136 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0816 17:43:18.473694  294136 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0816 17:43:18.480476  294136 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0816 17:43:18.480667  294136 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0816 17:43:18.581496  294136 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0816 17:43:18.581618  294136 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0816 17:43:19.583385  294136 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003730789s
	I0816 17:43:19.583476  294136 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0816 17:43:25.584730  294136 kubeadm.go:310] [api-check] The API server is healthy after 6.001349821s
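Both the kubelet-check (port 10248) and the api-check above are plain HTTP polls against a /healthz endpoint. A generic Go sketch of such a poll loop; the kubelet URL and 4m budget come from the log, while the 500ms interval is an assumption:

```go
// Sketch: poll an HTTP healthz endpoint until it returns 200 or times out.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kubelet healthy")
}
```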
	I0816 17:43:25.604653  294136 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0816 17:43:25.617125  294136 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0816 17:43:25.640612  294136 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0816 17:43:25.640804  294136 kubeadm.go:310] [mark-control-plane] Marking the node addons-864899 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0816 17:43:25.653642  294136 kubeadm.go:310] [bootstrap-token] Using token: fa8lgr.akictnhapdddpt18
	I0816 17:43:25.655792  294136 out.go:235]   - Configuring RBAC rules ...
	I0816 17:43:25.655920  294136 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0816 17:43:25.660750  294136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0816 17:43:25.670899  294136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0816 17:43:25.674405  294136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0816 17:43:25.677725  294136 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0816 17:43:25.681319  294136 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0816 17:43:25.991672  294136 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0816 17:43:26.418076  294136 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0816 17:43:26.991728  294136 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0816 17:43:26.992820  294136 kubeadm.go:310] 
	I0816 17:43:26.992891  294136 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0816 17:43:26.992898  294136 kubeadm.go:310] 
	I0816 17:43:26.992973  294136 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0816 17:43:26.992978  294136 kubeadm.go:310] 
	I0816 17:43:26.993023  294136 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0816 17:43:26.993081  294136 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0816 17:43:26.993131  294136 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0816 17:43:26.993136  294136 kubeadm.go:310] 
	I0816 17:43:26.993196  294136 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0816 17:43:26.993201  294136 kubeadm.go:310] 
	I0816 17:43:26.993247  294136 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0816 17:43:26.993251  294136 kubeadm.go:310] 
	I0816 17:43:26.993300  294136 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0816 17:43:26.993372  294136 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0816 17:43:26.993437  294136 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0816 17:43:26.993442  294136 kubeadm.go:310] 
	I0816 17:43:26.993523  294136 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0816 17:43:26.993596  294136 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0816 17:43:26.993600  294136 kubeadm.go:310] 
	I0816 17:43:26.993681  294136 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fa8lgr.akictnhapdddpt18 \
	I0816 17:43:26.993780  294136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5d30401e9108382eeb8fe5c377a6500b3dde6e64fdfafc41c12fd0988d4703e6 \
	I0816 17:43:26.993800  294136 kubeadm.go:310] 	--control-plane 
	I0816 17:43:26.993804  294136 kubeadm.go:310] 
	I0816 17:43:26.993886  294136 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0816 17:43:26.993890  294136 kubeadm.go:310] 
	I0816 17:43:26.993969  294136 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fa8lgr.akictnhapdddpt18 \
	I0816 17:43:26.994066  294136 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5d30401e9108382eeb8fe5c377a6500b3dde6e64fdfafc41c12fd0988d4703e6 
	I0816 17:43:26.998705  294136 kubeadm.go:310] W0816 17:43:11.880260    1040 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:43:26.999008  294136 kubeadm.go:310] W0816 17:43:11.881312    1040 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0816 17:43:26.999213  294136 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0816 17:43:26.999315  294136 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0816 17:43:26.999331  294136 cni.go:84] Creating CNI manager for ""
	I0816 17:43:26.999340  294136 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 17:43:27.003170  294136 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0816 17:43:27.005705  294136 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0816 17:43:27.010473  294136 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0816 17:43:27.010496  294136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0816 17:43:27.031077  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0816 17:43:27.314475  294136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 17:43:27.314551  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:27.314614  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-864899 minikube.k8s.io/updated_at=2024_08_16T17_43_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd minikube.k8s.io/name=addons-864899 minikube.k8s.io/primary=true
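The kubectl label invocation above stamps version and commit metadata onto the node. The same labels could be applied through client-go with a strategic-merge patch, sketched below with a subset of the labels; the kubeconfig path, node name, and label values are taken from the log, and using client-go instead of kubectl is this sketch's own choice:

```go
// Sketch: label a node via a strategic-merge patch instead of shelling out.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/name":"addons-864899",` +
		`"minikube.k8s.io/primary":"true"}}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.Background(), "addons-864899",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}
```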
	I0816 17:43:27.331163  294136 ops.go:34] apiserver oom_adj: -16
	I0816 17:43:27.463722  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:27.963885  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:28.464809  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:28.964820  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:29.464096  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:29.963867  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:30.463971  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:30.963899  294136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 17:43:31.104686  294136 kubeadm.go:1113] duration metric: took 3.790207614s to wait for elevateKubeSystemPrivileges
	I0816 17:43:31.104720  294136 kubeadm.go:394] duration metric: took 19.398006301s to StartCluster
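The burst of "kubectl get sa default" runs above is a readiness loop: minikube retries until the default ServiceAccount exists before granting kube-system privileges. A hedged client-go sketch of such a wait, where the namespace, interval, and timeout are assumptions:

```go
// Sketch: poll until the "default" ServiceAccount exists, like the retry loop above.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
		2*time.Minute, true, func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			return err == nil, nil // keep polling while the lookup still fails
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account ready")
}
```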
	I0816 17:43:31.104739  294136 settings.go:142] acquiring lock: {Name:mke5f8bb0a9e0ea5bfe13ebba62cb869c1a95955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:31.104875  294136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:43:31.105277  294136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/kubeconfig: {Name:mkf88e71d9d88c4917ceda8d8c4a2c6c3a01b716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:43:31.105493  294136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0816 17:43:31.105637  294136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 17:43:31.105918  294136 config.go:182] Loaded profile config "addons-864899": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:43:31.105957  294136 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0816 17:43:31.106080  294136 addons.go:69] Setting yakd=true in profile "addons-864899"
	I0816 17:43:31.106104  294136 addons.go:234] Setting addon yakd=true in "addons-864899"
	I0816 17:43:31.106135  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.106619  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.107154  294136 addons.go:69] Setting metrics-server=true in profile "addons-864899"
	I0816 17:43:31.107192  294136 addons.go:234] Setting addon metrics-server=true in "addons-864899"
	I0816 17:43:31.107225  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.107658  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.107834  294136 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-864899"
	I0816 17:43:31.107857  294136 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-864899"
	I0816 17:43:31.107892  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.108372  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.110492  294136 addons.go:69] Setting cloud-spanner=true in profile "addons-864899"
	I0816 17:43:31.110539  294136 addons.go:234] Setting addon cloud-spanner=true in "addons-864899"
	I0816 17:43:31.110573  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.110841  294136 addons.go:69] Setting registry=true in profile "addons-864899"
	I0816 17:43:31.110911  294136 addons.go:234] Setting addon registry=true in "addons-864899"
	I0816 17:43:31.110987  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.111051  294136 addons.go:69] Setting gcp-auth=true in profile "addons-864899"
	I0816 17:43:31.111087  294136 mustload.go:65] Loading cluster: addons-864899
	I0816 17:43:31.111242  294136 config.go:182] Loaded profile config "addons-864899": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:43:31.111452  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.111034  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.121078  294136 addons.go:69] Setting storage-provisioner=true in profile "addons-864899"
	I0816 17:43:31.121184  294136 addons.go:234] Setting addon storage-provisioner=true in "addons-864899"
	I0816 17:43:31.121264  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.121832  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.122126  294136 addons.go:69] Setting ingress=true in profile "addons-864899"
	I0816 17:43:31.122160  294136 addons.go:234] Setting addon ingress=true in "addons-864899"
	I0816 17:43:31.122196  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.122580  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.134235  294136 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-864899"
	I0816 17:43:31.134279  294136 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-864899"
	I0816 17:43:31.134612  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.154073  294136 addons.go:69] Setting ingress-dns=true in profile "addons-864899"
	I0816 17:43:31.154173  294136 addons.go:234] Setting addon ingress-dns=true in "addons-864899"
	I0816 17:43:31.154251  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.154696  294136 addons.go:69] Setting volcano=true in profile "addons-864899"
	I0816 17:43:31.154720  294136 addons.go:234] Setting addon volcano=true in "addons-864899"
	I0816 17:43:31.154747  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.155257  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.166178  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.168520  294136 addons.go:69] Setting volumesnapshots=true in profile "addons-864899"
	I0816 17:43:31.168608  294136 addons.go:234] Setting addon volumesnapshots=true in "addons-864899"
	I0816 17:43:31.168685  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.169359  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.183865  294136 addons.go:69] Setting inspektor-gadget=true in profile "addons-864899"
	I0816 17:43:31.183967  294136 addons.go:234] Setting addon inspektor-gadget=true in "addons-864899"
	I0816 17:43:31.184048  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.184782  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.111041  294136 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-864899"
	I0816 17:43:31.202007  294136 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-864899"
	I0816 17:43:31.202082  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.202700  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.208934  294136 out.go:177] * Verifying Kubernetes components...
	I0816 17:43:31.213078  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.111047  294136 addons.go:69] Setting default-storageclass=true in profile "addons-864899"
	I0816 17:43:31.217313  294136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-864899"
	I0816 17:43:31.217650  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.302501  294136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 17:43:31.302598  294136 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0816 17:43:31.305043  294136 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:43:31.305064  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 17:43:31.305142  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.316077  294136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 17:43:31.330913  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.335342  294136 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:43:31.335361  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0816 17:43:31.335421  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.343315  294136 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0816 17:43:31.360237  294136 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0816 17:43:31.362579  294136 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0816 17:43:31.362625  294136 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0816 17:43:31.362716  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.380793  294136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0816 17:43:31.386791  294136 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0816 17:43:31.390054  294136 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-864899"
	I0816 17:43:31.390105  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.390531  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.395849  294136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:43:31.398398  294136 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 17:43:31.398419  294136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 17:43:31.398509  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.398517  294136 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0816 17:43:31.399831  294136 addons.go:234] Setting addon default-storageclass=true in "addons-864899"
	I0816 17:43:31.399910  294136 out.go:177]   - Using image docker.io/registry:2.8.3
	I0816 17:43:31.404669  294136 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0816 17:43:31.418024  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:31.418679  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:31.423508  294136 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0816 17:43:31.423528  294136 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0816 17:43:31.423596  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.435626  294136 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0816 17:43:31.440700  294136 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0816 17:43:31.440767  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0816 17:43:31.440877  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.455052  294136 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0816 17:43:31.455075  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0816 17:43:31.455143  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.457495  294136 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0816 17:43:31.480515  294136 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0816 17:43:31.489816  294136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:43:31.514883  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.515097  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0816 17:43:31.515353  294136 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:43:31.515405  294136 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:43:31.523988  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0816 17:43:31.529353  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0816 17:43:31.529380  294136 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0816 17:43:31.529455  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.529631  294136 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0816 17:43:31.532522  294136 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0816 17:43:31.542713  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0816 17:43:31.540601  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0816 17:43:31.540612  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0816 17:43:31.543151  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.542936  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.543079  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.546898  294136 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0816 17:43:31.563514  294136 out.go:177]   - Using image docker.io/busybox:stable
	I0816 17:43:31.569084  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0816 17:43:31.569335  294136 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:43:31.569348  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0816 17:43:31.569414  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.569877  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.571128  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.573393  294136 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 17:43:31.573409  294136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 17:43:31.573461  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.595774  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0816 17:43:31.600072  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0816 17:43:31.602092  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0816 17:43:31.605091  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0816 17:43:31.607264  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0816 17:43:31.609082  294136 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0816 17:43:31.612950  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.613780  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0816 17:43:31.613795  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0816 17:43:31.613866  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:31.621969  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.681445  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.682029  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.697071  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.728411  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.751321  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.756252  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.761734  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.768642  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:31.770473  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	W0816 17:43:31.782161  294136 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0816 17:43:31.782199  294136 retry.go:31] will retry after 307.654354ms: ssh: handshake failed: EOF
	W0816 17:43:31.782649  294136 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0816 17:43:31.782671  294136 retry.go:31] will retry after 210.677735ms: ssh: handshake failed: EOF
	I0816 17:43:31.950727  294136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 17:43:31.950847  294136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 17:43:32.326972  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 17:43:32.479325  294136 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0816 17:43:32.479354  294136 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0816 17:43:32.492625  294136 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 17:43:32.492660  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0816 17:43:32.499702  294136 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0816 17:43:32.499730  294136 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0816 17:43:32.530845  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0816 17:43:32.575057  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0816 17:43:32.591694  294136 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0816 17:43:32.591721  294136 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0816 17:43:32.609532  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0816 17:43:32.609560  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0816 17:43:32.611918  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 17:43:32.635751  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0816 17:43:32.722343  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0816 17:43:32.731298  294136 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 17:43:32.731324  294136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 17:43:32.745561  294136 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0816 17:43:32.745589  294136 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0816 17:43:32.761957  294136 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:43:32.761984  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0816 17:43:32.764783  294136 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0816 17:43:32.764819  294136 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0816 17:43:32.796776  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0816 17:43:32.802575  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0816 17:43:32.802611  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0816 17:43:32.913765  294136 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0816 17:43:32.913803  294136 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0816 17:43:32.952138  294136 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0816 17:43:32.952175  294136 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0816 17:43:32.956822  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0816 17:43:32.963898  294136 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:43:32.963939  294136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 17:43:33.089246  294136 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0816 17:43:33.089272  294136 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0816 17:43:33.098808  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0816 17:43:33.194453  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0816 17:43:33.194530  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0816 17:43:33.259001  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 17:43:33.314859  294136 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0816 17:43:33.314936  294136 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0816 17:43:33.382779  294136 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:43:33.382849  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0816 17:43:33.451000  294136 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0816 17:43:33.451066  294136 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0816 17:43:33.637428  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0816 17:43:33.637503  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0816 17:43:33.755737  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0816 17:43:33.755813  294136 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0816 17:43:33.817693  294136 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0816 17:43:33.817771  294136 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0816 17:43:33.853296  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0816 17:43:34.122673  294136 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0816 17:43:34.122758  294136 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0816 17:43:34.198305  294136 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.247344259s)
	I0816 17:43:34.198390  294136 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.247507655s)
	I0816 17:43:34.198561  294136 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0816 17:43:34.198456  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.871461949s)
	I0816 17:43:34.200341  294136 node_ready.go:35] waiting up to 6m0s for node "addons-864899" to be "Ready" ...
	I0816 17:43:34.205263  294136 node_ready.go:49] node "addons-864899" has status "Ready":"True"
	I0816 17:43:34.205287  294136 node_ready.go:38] duration metric: took 4.801914ms for node "addons-864899" to be "Ready" ...
	I0816 17:43:34.205295  294136 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:43:34.219297  294136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:34.299357  294136 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:43:34.299428  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0816 17:43:34.351555  294136 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0816 17:43:34.351634  294136 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0816 17:43:34.663238  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0816 17:43:34.663310  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0816 17:43:34.702813  294136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-864899" context rescaled to 1 replicas
	I0816 17:43:34.776486  294136 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0816 17:43:34.776562  294136 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0816 17:43:34.814176  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:43:34.979480  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0816 17:43:34.979562  294136 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0816 17:43:35.158634  294136 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:43:35.158712  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0816 17:43:35.368604  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0816 17:43:35.418496  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0816 17:43:35.418559  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0816 17:43:35.653236  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0816 17:43:35.653256  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0816 17:43:36.004333  294136 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:43:36.004429  294136 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0816 17:43:36.266959  294136 pod_ready.go:103] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"False"
	I0816 17:43:36.474710  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0816 17:43:38.555431  294136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0816 17:43:38.555563  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:38.589211  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:38.777941  294136 pod_ready.go:103] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"False"
	I0816 17:43:39.302783  294136 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0816 17:43:39.421836  294136 addons.go:234] Setting addon gcp-auth=true in "addons-864899"
	I0816 17:43:39.421927  294136 host.go:66] Checking if "addons-864899" exists ...
	I0816 17:43:39.422395  294136 cli_runner.go:164] Run: docker container inspect addons-864899 --format={{.State.Status}}
	I0816 17:43:39.445214  294136 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0816 17:43:39.445273  294136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-864899
	I0816 17:43:39.467643  294136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/addons-864899/id_rsa Username:docker}
	I0816 17:43:40.219149  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.688264503s)
	I0816 17:43:40.219224  294136 addons.go:475] Verifying addon ingress=true in "addons-864899"
	I0816 17:43:40.219432  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.644348518s)
	I0816 17:43:40.219515  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.607576327s)
	I0816 17:43:40.219585  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.583811405s)
	I0816 17:43:40.219763  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.497394623s)
	I0816 17:43:40.222254  294136 out.go:177] * Verifying ingress addon...
	I0816 17:43:40.225641  294136 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0816 17:43:40.236059  294136 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0816 17:43:40.236122  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:40.731177  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:41.293578  294136 pod_ready.go:103] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"False"
	I0816 17:43:41.300918  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:41.810955  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:42.274759  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:42.291744  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.494931677s)
	I0816 17:43:42.291829  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.334981017s)
	I0816 17:43:42.291922  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.193035059s)
	I0816 17:43:42.291940  294136 addons.go:475] Verifying addon registry=true in "addons-864899"
	I0816 17:43:42.292116  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.033036923s)
	I0816 17:43:42.292142  294136 addons.go:475] Verifying addon metrics-server=true in "addons-864899"
	I0816 17:43:42.292187  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.438815869s)
	I0816 17:43:42.292455  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.478194226s)
	W0816 17:43:42.292500  294136 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:43:42.292521  294136 retry.go:31] will retry after 221.089603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0816 17:43:42.292595  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.923915366s)
	I0816 17:43:42.293998  294136 out.go:177] * Verifying registry addon...
	I0816 17:43:42.294190  294136 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-864899 service yakd-dashboard -n yakd-dashboard
	
	I0816 17:43:42.296905  294136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0816 17:43:42.353574  294136 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0816 17:43:42.353603  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:42.514540  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0816 17:43:42.755119  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:42.819672  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:43.025158  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.55035348s)
	I0816 17:43:43.025311  294136 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-864899"
	I0816 17:43:43.025262  294136 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.580026199s)
	I0816 17:43:43.028470  294136 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0816 17:43:43.028485  294136 out.go:177] * Verifying csi-hostpath-driver addon...
	I0816 17:43:43.031564  294136 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0816 17:43:43.032376  294136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0816 17:43:43.033767  294136 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0816 17:43:43.033796  294136 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0816 17:43:43.062014  294136 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0816 17:43:43.062098  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:43.156214  294136 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0816 17:43:43.156281  294136 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0816 17:43:43.225408  294136 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:43:43.225480  294136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0816 17:43:43.231087  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:43.309495  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:43.318760  294136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0816 17:43:43.538953  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:43.736103  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:43.738962  294136 pod_ready.go:103] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"False"
	I0816 17:43:43.801650  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:44.039059  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:44.255778  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:44.342664  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:44.353083  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.83849176s)
	I0816 17:43:44.353206  294136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.034419143s)
	I0816 17:43:44.356685  294136 addons.go:475] Verifying addon gcp-auth=true in "addons-864899"
	I0816 17:43:44.359391  294136 out.go:177] * Verifying gcp-auth addon...
	I0816 17:43:44.362121  294136 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0816 17:43:44.364654  294136 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 17:43:44.538306  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:44.731368  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:44.801344  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:45.044169  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:45.236902  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:45.337866  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:45.538263  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:45.733021  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:45.800746  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:46.038197  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:46.234904  294136 pod_ready.go:103] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"False"
	I0816 17:43:46.237846  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:46.301524  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:46.538609  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:46.734366  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:46.803654  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:47.037415  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:47.241789  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:47.337513  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:47.546535  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:47.725964  294136 pod_ready.go:93] pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:47.726029  294136 pod_ready.go:82] duration metric: took 13.506645446s for pod "coredns-6f6b679f8f-lls4v" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.726056  294136 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mrpg2" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.729407  294136 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-mrpg2" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-mrpg2" not found
	I0816 17:43:47.729482  294136 pod_ready.go:82] duration metric: took 3.404965ms for pod "coredns-6f6b679f8f-mrpg2" in "kube-system" namespace to be "Ready" ...
	E0816 17:43:47.729508  294136 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-mrpg2" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-mrpg2" not found
	I0816 17:43:47.729528  294136 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.734422  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:47.736873  294136 pod_ready.go:93] pod "etcd-addons-864899" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:47.736934  294136 pod_ready.go:82] duration metric: took 7.366293ms for pod "etcd-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.736963  294136 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.742232  294136 pod_ready.go:93] pod "kube-apiserver-addons-864899" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:47.742297  294136 pod_ready.go:82] duration metric: took 5.313641ms for pod "kube-apiserver-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.742321  294136 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.748345  294136 pod_ready.go:93] pod "kube-controller-manager-addons-864899" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:47.748414  294136 pod_ready.go:82] duration metric: took 6.071316ms for pod "kube-controller-manager-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.748439  294136 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kjxrw" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.801153  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:47.923437  294136 pod_ready.go:93] pod "kube-proxy-kjxrw" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:47.923509  294136 pod_ready.go:82] duration metric: took 175.049529ms for pod "kube-proxy-kjxrw" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:47.923533  294136 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:48.037819  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:48.231380  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:48.301118  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:48.323548  294136 pod_ready.go:93] pod "kube-scheduler-addons-864899" in "kube-system" namespace has status "Ready":"True"
	I0816 17:43:48.323621  294136 pod_ready.go:82] duration metric: took 400.067407ms for pod "kube-scheduler-addons-864899" in "kube-system" namespace to be "Ready" ...
	I0816 17:43:48.323647  294136 pod_ready.go:39] duration metric: took 14.118338873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 17:43:48.323689  294136 api_server.go:52] waiting for apiserver process to appear ...
	I0816 17:43:48.323766  294136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:43:48.336515  294136 api_server.go:72] duration metric: took 17.230981892s to wait for apiserver process to appear ...
	I0816 17:43:48.336579  294136 api_server.go:88] waiting for apiserver healthz status ...
	I0816 17:43:48.336613  294136 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0816 17:43:48.344266  294136 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0816 17:43:48.345501  294136 api_server.go:141] control plane version: v1.31.0
	I0816 17:43:48.345555  294136 api_server.go:131] duration metric: took 8.955857ms to wait for apiserver health ...
	I0816 17:43:48.345579  294136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 17:43:48.530009  294136 system_pods.go:59] 18 kube-system pods found
	I0816 17:43:48.530046  294136 system_pods.go:61] "coredns-6f6b679f8f-lls4v" [4d9b5208-3f7d-4748-9b64-d0d51b8a759b] Running
	I0816 17:43:48.530057  294136 system_pods.go:61] "csi-hostpath-attacher-0" [c78df4bb-8609-488d-ada2-9f65b23d5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 17:43:48.530066  294136 system_pods.go:61] "csi-hostpath-resizer-0" [b540c0f9-77a8-4718-aec9-fc87bc84aa2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 17:43:48.530075  294136 system_pods.go:61] "csi-hostpathplugin-qtzcx" [a668f80d-6b61-4306-8db0-6820fd445cee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 17:43:48.530081  294136 system_pods.go:61] "etcd-addons-864899" [aa4a09f3-6eff-400e-a840-29260e895a0c] Running
	I0816 17:43:48.530094  294136 system_pods.go:61] "kindnet-mcs5f" [78fe4c47-0645-40bf-9c17-5b60270e7389] Running
	I0816 17:43:48.530099  294136 system_pods.go:61] "kube-apiserver-addons-864899" [b3da2fd2-d299-4c50-abe3-1b6780c5b123] Running
	I0816 17:43:48.530111  294136 system_pods.go:61] "kube-controller-manager-addons-864899" [9476c8af-7528-4161-ad0d-216e6daa7240] Running
	I0816 17:43:48.530115  294136 system_pods.go:61] "kube-ingress-dns-minikube" [d970ac4a-dd4d-49ec-8143-b1efc71394e5] Running
	I0816 17:43:48.530119  294136 system_pods.go:61] "kube-proxy-kjxrw" [0ecc5e80-8d3c-49d5-9778-836dce762e8d] Running
	I0816 17:43:48.530125  294136 system_pods.go:61] "kube-scheduler-addons-864899" [6630df07-6718-4b66-b945-db63c48e8b12] Running
	I0816 17:43:48.530131  294136 system_pods.go:61] "metrics-server-8988944d9-7x26w" [b9b0a4cd-3c28-4e65-bf08-6b2a89a4fdde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 17:43:48.530141  294136 system_pods.go:61] "nvidia-device-plugin-daemonset-k9vv2" [1abed4b9-c96b-4ed0-9de5-2035c284afa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0816 17:43:48.530147  294136 system_pods.go:61] "registry-6fb4cdfc84-pvhcl" [67686d20-79b3-4c3c-b120-f96dfea3fa24] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 17:43:48.530156  294136 system_pods.go:61] "registry-proxy-hftrd" [0e434063-8781-41ea-8c5b-c43130270841] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 17:43:48.530165  294136 system_pods.go:61] "snapshot-controller-56fcc65765-6m9j4" [80ee8b08-425f-45ff-978a-d895f79970a3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 17:43:48.530174  294136 system_pods.go:61] "snapshot-controller-56fcc65765-hblt6" [ebf55a1e-84cd-4f98-b07a-bef356a479f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 17:43:48.530180  294136 system_pods.go:61] "storage-provisioner" [ee16551c-4e29-4d38-860c-ec4637547833] Running
	I0816 17:43:48.530187  294136 system_pods.go:74] duration metric: took 184.590647ms to wait for pod list to return data ...
	I0816 17:43:48.530196  294136 default_sa.go:34] waiting for default service account to be created ...
	I0816 17:43:48.537452  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:48.723937  294136 default_sa.go:45] found service account: "default"
	I0816 17:43:48.724002  294136 default_sa.go:55] duration metric: took 193.797719ms for default service account to be created ...
	I0816 17:43:48.724027  294136 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 17:43:48.731189  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:48.831720  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:48.934481  294136 system_pods.go:86] 18 kube-system pods found
	I0816 17:43:48.934566  294136 system_pods.go:89] "coredns-6f6b679f8f-lls4v" [4d9b5208-3f7d-4748-9b64-d0d51b8a759b] Running
	I0816 17:43:48.934593  294136 system_pods.go:89] "csi-hostpath-attacher-0" [c78df4bb-8609-488d-ada2-9f65b23d5b99] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0816 17:43:48.934629  294136 system_pods.go:89] "csi-hostpath-resizer-0" [b540c0f9-77a8-4718-aec9-fc87bc84aa2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0816 17:43:48.934655  294136 system_pods.go:89] "csi-hostpathplugin-qtzcx" [a668f80d-6b61-4306-8db0-6820fd445cee] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0816 17:43:48.934674  294136 system_pods.go:89] "etcd-addons-864899" [aa4a09f3-6eff-400e-a840-29260e895a0c] Running
	I0816 17:43:48.934691  294136 system_pods.go:89] "kindnet-mcs5f" [78fe4c47-0645-40bf-9c17-5b60270e7389] Running
	I0816 17:43:48.934709  294136 system_pods.go:89] "kube-apiserver-addons-864899" [b3da2fd2-d299-4c50-abe3-1b6780c5b123] Running
	I0816 17:43:48.934736  294136 system_pods.go:89] "kube-controller-manager-addons-864899" [9476c8af-7528-4161-ad0d-216e6daa7240] Running
	I0816 17:43:48.934761  294136 system_pods.go:89] "kube-ingress-dns-minikube" [d970ac4a-dd4d-49ec-8143-b1efc71394e5] Running
	I0816 17:43:48.934778  294136 system_pods.go:89] "kube-proxy-kjxrw" [0ecc5e80-8d3c-49d5-9778-836dce762e8d] Running
	I0816 17:43:48.934810  294136 system_pods.go:89] "kube-scheduler-addons-864899" [6630df07-6718-4b66-b945-db63c48e8b12] Running
	I0816 17:43:48.934835  294136 system_pods.go:89] "metrics-server-8988944d9-7x26w" [b9b0a4cd-3c28-4e65-bf08-6b2a89a4fdde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 17:43:48.934856  294136 system_pods.go:89] "nvidia-device-plugin-daemonset-k9vv2" [1abed4b9-c96b-4ed0-9de5-2035c284afa5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0816 17:43:48.934874  294136 system_pods.go:89] "registry-6fb4cdfc84-pvhcl" [67686d20-79b3-4c3c-b120-f96dfea3fa24] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0816 17:43:48.934907  294136 system_pods.go:89] "registry-proxy-hftrd" [0e434063-8781-41ea-8c5b-c43130270841] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0816 17:43:48.934931  294136 system_pods.go:89] "snapshot-controller-56fcc65765-6m9j4" [80ee8b08-425f-45ff-978a-d895f79970a3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 17:43:48.934951  294136 system_pods.go:89] "snapshot-controller-56fcc65765-hblt6" [ebf55a1e-84cd-4f98-b07a-bef356a479f4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0816 17:43:48.934966  294136 system_pods.go:89] "storage-provisioner" [ee16551c-4e29-4d38-860c-ec4637547833] Running
	I0816 17:43:48.934985  294136 system_pods.go:126] duration metric: took 210.941486ms to wait for k8s-apps to be running ...
	I0816 17:43:48.935017  294136 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 17:43:48.935125  294136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:43:48.955452  294136 system_svc.go:56] duration metric: took 20.428986ms WaitForService to wait for kubelet
	I0816 17:43:48.955478  294136 kubeadm.go:582] duration metric: took 17.84994891s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 17:43:48.955506  294136 node_conditions.go:102] verifying NodePressure condition ...
	I0816 17:43:49.037793  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:49.124140  294136 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0816 17:43:49.124175  294136 node_conditions.go:123] node cpu capacity is 2
	I0816 17:43:49.124187  294136 node_conditions.go:105] duration metric: took 168.675256ms to run NodePressure ...
	I0816 17:43:49.124216  294136 start.go:241] waiting for startup goroutines ...
	I0816 17:43:49.230259  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:49.301166  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:49.538233  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:49.730727  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:49.801718  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:50.037951  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:50.230422  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:50.301247  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:50.540813  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:50.730756  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:50.819609  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:51.039021  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:51.231833  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:51.302242  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:51.537625  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:51.730834  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:51.800340  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:52.038231  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:52.236023  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:52.300472  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:52.538308  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:52.730192  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:52.800515  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:53.037674  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:53.229907  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:53.300092  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:53.537569  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:53.730303  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:53.800827  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:54.038676  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:54.230874  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:54.300617  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:54.538831  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:54.730599  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:54.801263  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:55.037958  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:55.231182  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:55.300759  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:55.537939  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:55.730818  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:55.802053  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:56.038905  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:56.231053  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:56.300480  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:56.538338  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:56.730829  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:56.800527  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:57.041919  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:57.230858  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:57.301434  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:57.537727  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:57.731978  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:57.831688  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:58.037989  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:58.231092  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:58.300640  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:58.538164  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:58.730862  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:58.831156  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:59.037547  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:59.230099  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:59.300672  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:43:59.536861  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:43:59.730273  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:43:59.800809  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:00.062269  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:00.265642  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:00.329858  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:00.541270  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:00.743629  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:00.820105  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:01.038961  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:01.232003  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:01.301436  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:01.537918  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:01.730899  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:01.800661  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:02.037703  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:02.230939  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:02.301267  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:02.537760  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:02.730907  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:02.800685  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:03.038994  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:03.233252  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:03.332560  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0816 17:44:03.537885  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:03.731632  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:03.801063  294136 kapi.go:107] duration metric: took 21.504156599s to wait for kubernetes.io/minikube-addons=registry ...
	I0816 17:44:04.037943  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:04.232775  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:04.537348  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:04.730724  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:05.040927  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:05.231364  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:05.538307  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:05.730972  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:06.038788  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:06.230309  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:06.537926  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:06.731231  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:07.038237  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:07.231101  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:07.539512  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:07.731553  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:08.038640  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:08.233740  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:08.537788  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:08.729900  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:09.039130  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:09.231886  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:09.537610  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:09.730429  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:10.038713  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:10.230947  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:10.537229  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:10.730577  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:11.037084  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:11.230634  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:11.536936  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:11.730597  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:12.039590  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:12.234408  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:12.541692  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:12.733606  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:13.039129  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:13.238371  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:13.537513  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:13.732113  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:14.054471  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:14.230920  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:14.538533  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:14.739422  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:15.047045  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:15.231474  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:15.537563  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:15.735776  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:16.037204  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:16.231317  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:16.538223  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:16.730699  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:17.038333  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:17.233710  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:17.537560  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:17.734170  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:18.066178  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:18.234418  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:18.539051  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:18.731365  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:19.037116  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:19.229873  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:19.537440  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:19.730419  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:20.037683  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:20.232838  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:20.537527  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:20.730309  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:21.037416  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:21.231144  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:21.538962  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:21.730431  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:22.037111  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:22.231170  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:22.537224  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:22.729858  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:23.037605  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:23.230970  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:23.537851  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:23.731619  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:24.037495  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:24.231308  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:24.537623  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:24.730905  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:25.037230  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:25.231567  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:25.538209  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:25.731112  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:26.037789  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:26.230953  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:26.539014  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:26.731208  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:27.038048  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:27.231124  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:27.537938  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:27.731159  294136 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0816 17:44:28.038269  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:28.235328  294136 kapi.go:107] duration metric: took 48.00968743s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0816 17:44:28.539313  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:29.037753  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:29.539287  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:30.043644  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:30.538429  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:31.038649  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:31.537907  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:32.038378  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:32.538210  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:33.037684  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:33.537341  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:34.038028  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:34.537450  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:35.038446  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:35.538154  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0816 17:44:36.037486  294136 kapi.go:107] duration metric: took 53.005104813s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0816 17:45:06.370191  294136 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0816 17:45:06.370221  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:06.866370  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:07.365833  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:07.865542  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:08.366802  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:08.865898  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:09.366712  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:09.865595  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:10.366611  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:10.866756  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:11.365855  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:11.865475  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:12.365609  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:12.866293  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:13.365253  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:13.865844  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:14.366526  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:14.866453  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:15.366271  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:15.865466  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:16.366195  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:16.866072  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:17.366737  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:17.865711  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:18.365701  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:18.865715  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:19.366651  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:19.865757  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:20.365963  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:20.866068  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:21.366025  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:21.866098  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:22.365879  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:22.865955  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:23.365663  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:23.866272  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:24.365989  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:24.866393  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:25.365816  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:25.865197  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:26.366138  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:26.866961  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:27.365859  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:27.866105  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:28.365300  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:28.866540  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:29.400127  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:29.866010  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:30.366528  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:30.866042  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:31.366351  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:31.866223  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:32.366015  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:32.865807  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:33.365612  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:33.866772  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:34.365656  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:34.865613  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:35.366858  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:35.866147  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:36.365513  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:36.865622  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:37.366181  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:37.865560  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:38.366831  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:38.865757  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:39.366775  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:39.865403  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:40.366159  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:40.866180  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:41.366232  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:41.865337  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:42.367621  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:42.866279  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:43.365473  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:43.866800  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:44.367860  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:44.866103  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:45.366539  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:45.866092  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:46.365454  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:46.866362  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:47.366066  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:47.866352  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:48.366643  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:48.866899  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:49.366047  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:49.865965  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:50.366003  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:50.866550  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:51.366172  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:51.866914  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:52.365948  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:52.865704  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:53.365778  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:53.866285  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:54.366004  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:54.866199  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:55.365865  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:55.865411  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:56.366853  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:56.866004  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:57.365676  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:57.865660  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:58.365417  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:58.866267  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:59.365484  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:45:59.866776  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:00.365615  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:00.865635  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:01.365418  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:01.866504  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:02.366101  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:02.866544  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:03.366303  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:03.866337  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:04.365525  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:04.866137  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:05.366426  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:05.865359  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:06.366372  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:06.866089  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:07.365653  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:07.866452  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:08.365531  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:08.866658  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:09.365469  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:09.865395  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:10.366349  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:10.867088  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:11.366742  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:11.865682  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:12.367211  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:12.866488  294136 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0816 17:46:13.365578  294136 kapi.go:107] duration metric: took 2m29.003455605s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0816 17:46:13.368494  294136 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-864899 cluster.
	I0816 17:46:13.371462  294136 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0816 17:46:13.373615  294136 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0816 17:46:13.376400  294136 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0816 17:46:13.379061  294136 addons.go:510] duration metric: took 2m42.273097924s for enable addons: enabled=[default-storageclass nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher volcano ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0816 17:46:13.379106  294136 start.go:246] waiting for cluster config update ...
	I0816 17:46:13.379128  294136 start.go:255] writing updated cluster config ...
	I0816 17:46:13.379426  294136 ssh_runner.go:195] Run: rm -f paused
	I0816 17:46:13.736266  294136 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0816 17:46:13.739604  294136 out.go:177] * Done! kubectl is now configured to use "addons-864899" cluster and "default" namespace by default
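The hundreds of `kapi.go:96` "waiting for pod" lines above record a poll-by-label loop: minikube repeatedly lists each addon's pods by label selector and retries until they leave Pending. What follows is a minimal client-go sketch of that pattern, not minikube's actual kapi implementation; the 500ms interval, the namespace, and the timeout are assumptions, while the selector is reused from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPodsRunning polls until every pod matching selector in ns is
// Running -- the same poll-by-label pattern the kapi.go:96 lines above record.
func waitForLabeledPodsRunning(ctx context.Context, client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // no pods yet (or transient API error): keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending (or otherwise not Running)
				}
			}
			return true, nil
		})
}

func main() {
	// Load the kubeconfig minikube wrote (defaults to ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Selector taken from the log; namespace and timeout are assumptions.
	err = waitForLabeledPodsRunning(context.Background(), client,
		"kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
	fmt.Println("wait result:", err)
}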
	
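The three gcp-auth notes near the end of the start log amount to a small how-to: once the addon is enabled, every new pod gets GCP credentials mounted unless it carries a `gcp-auth-skip-secret` label. As a hedged illustration only (the label key is taken from the log; the pod name, image, namespace, and the "true" value are assumptions, and this is not minikube's own code), a minimal client-go sketch that creates such an opted-out pod:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube wrote (defaults to ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // hypothetical name, for illustration only
			// Label key from the log above; the "true" value is an assumption
			// about the expected convention.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}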
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	0664fe9de8aae       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   11099b71bd022       gadget-tcnmn
	14dc3f65e39e9       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   aba524e236596       gcp-auth-89d5ffd79-gpbrj
	9f59dbfb8aace       8b46b1cd48760       4 minutes ago       Running             admission                                0                   997d8fc03ccf4       volcano-admission-77d7d48b68-2f46h
	9e91dca7bd149       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	084dfd66ebaf4       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	30b7b2b4c41f1       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	3edb9afe7c0e7       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	d1ebd253ec9fb       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	22166e0d380de       289a818c8d9c5       5 minutes ago       Running             controller                               0                   0d5d8c504e29c       ingress-nginx-controller-bc57996ff-w4vnd
	827bb8260e8e9       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   1e2cf52520b88       csi-hostpathplugin-qtzcx
	9fc3d38fde586       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   3c962879a3632       csi-hostpath-resizer-0
	73d54c7c0d19e       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   648d2f57ad065       csi-hostpath-attacher-0
	40d976d38f80e       420193b27261a       5 minutes ago       Exited              patch                                    0                   16b7d406f88a5       ingress-nginx-admission-patch-vd7pb
	554881eceef49       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   213e60007927d       volcano-controllers-56675bb4d5-kdtv4
	cbf6ba7565ad5       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   785b06e102bf7       snapshot-controller-56fcc65765-hblt6
	da02c11fa4a73       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   db8b76fdb37ad       volcano-scheduler-576bc46687-jz6d7
	45f4148961806       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   e4d20e4cc6680       snapshot-controller-56fcc65765-6m9j4
	59ea941bfedca       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   b2854d6f88d3e       metrics-server-8988944d9-7x26w
	cabec07f39799       420193b27261a       5 minutes ago       Exited              create                                   0                   bfee19d4febad       ingress-nginx-admission-create-zpv9t
	08763197ad44b       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   07ef3f5afce6b       local-path-provisioner-86d989889c-2prk7
	284418d05c49c       77bdba588b953       5 minutes ago       Running             yakd                                     0                   2a50d870df250       yakd-dashboard-67d98fc6b-b44t7
	b48234f8760b8       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   aa8b3ccc26384       registry-proxy-hftrd
	252269d3b88b2       6fed88f43b276       5 minutes ago       Running             registry                                 0                   6a5750ed5e86f       registry-6fb4cdfc84-pvhcl
	2cebe31aa30d0       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   4bd02e3c34803       cloud-spanner-emulator-c4bc9b5f8-6np5w
	0d5f70f0dfbae       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   fabb42d68be47       nvidia-device-plugin-daemonset-k9vv2
	b0990e9e65a74       2437cf7621777       5 minutes ago       Running             coredns                                  0                   7a53bc3da2d89       coredns-6f6b679f8f-lls4v
	85fcab11f791c       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   0e9073ef4dceb       kube-ingress-dns-minikube
	6aea42a6df509       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   71ca8ef378a20       storage-provisioner
	fc6344743ae93       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   e1f16174e5ff1       kindnet-mcs5f
	f7a1610f3e767       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   5b13b590e7aac       kube-proxy-kjxrw
	4c5d7397adeba       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   e06936a6e95bc       kube-apiserver-addons-864899
	d082ca7c776dc       27e3830e14027       6 minutes ago       Running             etcd                                     0                   a366f2ce67ec3       etcd-addons-864899
	24f36c489fe04       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   a78b8cc50d266       kube-scheduler-addons-864899
	682980ddf02ed       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   5c7913c187df5       kube-controller-manager-addons-864899
	
	
	==> containerd <==
	Aug 16 17:46:26 addons-864899 containerd[818]: time="2024-08-16T17:46:26.422422143Z" level=info msg="RemovePodSandbox \"7fed21dc55d4447d296422fcb470416c3e6e42e514389499f1d410be66e604da\" returns successfully"
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.370069045Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.501582985Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.502228398Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.508440677Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 138.316305ms"
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.508632324Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.511089855Z" level=info msg="CreateContainer within sandbox \"11099b71bd022b272f753bc7df9caf0cd2660975ed762e62c3c1811b7fc1f8fc\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.532145316Z" level=info msg="CreateContainer within sandbox \"11099b71bd022b272f753bc7df9caf0cd2660975ed762e62c3c1811b7fc1f8fc\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48\""
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.532860333Z" level=info msg="StartContainer for \"0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48\""
	Aug 16 17:47:23 addons-864899 containerd[818]: time="2024-08-16T17:47:23.595540820Z" level=info msg="StartContainer for \"0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48\" returns successfully"
	Aug 16 17:47:24 addons-864899 containerd[818]: time="2024-08-16T17:47:24.823968053Z" level=info msg="shim disconnected" id=0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48 namespace=k8s.io
	Aug 16 17:47:24 addons-864899 containerd[818]: time="2024-08-16T17:47:24.824036138Z" level=warning msg="cleaning up after shim disconnected" id=0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48 namespace=k8s.io
	Aug 16 17:47:24 addons-864899 containerd[818]: time="2024-08-16T17:47:24.824047560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 16 17:47:25 addons-864899 containerd[818]: time="2024-08-16T17:47:25.447995200Z" level=info msg="RemoveContainer for \"4480d96a13e531e7d8ffa69bbe5658f857e443a861cef88f168ad57f12e19ca9\""
	Aug 16 17:47:25 addons-864899 containerd[818]: time="2024-08-16T17:47:25.454344989Z" level=info msg="RemoveContainer for \"4480d96a13e531e7d8ffa69bbe5658f857e443a861cef88f168ad57f12e19ca9\" returns successfully"
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.426002307Z" level=info msg="RemoveContainer for \"1a5023da9b9cd2c65dcea975878690f266e8cd1a2a24f1be640b3e42bdc580b9\""
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.431978994Z" level=info msg="RemoveContainer for \"1a5023da9b9cd2c65dcea975878690f266e8cd1a2a24f1be640b3e42bdc580b9\" returns successfully"
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.434071710Z" level=info msg="StopPodSandbox for \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\""
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.441371206Z" level=info msg="TearDown network for sandbox \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\" successfully"
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.441410450Z" level=info msg="StopPodSandbox for \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\" returns successfully"
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.441823987Z" level=info msg="RemovePodSandbox for \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\""
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.441858243Z" level=info msg="Forcibly stopping sandbox \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\""
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.454454158Z" level=info msg="TearDown network for sandbox \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\" successfully"
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.460414885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 16 17:47:26 addons-864899 containerd[818]: time="2024-08-16T17:47:26.461078407Z" level=info msg="RemovePodSandbox \"d7ec72ba204553769d5724dd3a439bb5d66ea1327e6a22bef59fba233d3b68f4\" returns successfully"
	
	
	==> coredns [b0990e9e65a7424bbe991114384db6df8ac06b8979e1cfaef3fdfbfb2a469047] <==
	[INFO] 10.244.0.5:43105 - 53790 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068997s
	[INFO] 10.244.0.5:49432 - 26715 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002493199s
	[INFO] 10.244.0.5:49432 - 32093 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002566495s
	[INFO] 10.244.0.5:42253 - 53106 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184984s
	[INFO] 10.244.0.5:42253 - 24438 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000240934s
	[INFO] 10.244.0.5:53065 - 17669 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111671s
	[INFO] 10.244.0.5:53065 - 22842 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000056558s
	[INFO] 10.244.0.5:53947 - 14255 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000053259s
	[INFO] 10.244.0.5:53947 - 51665 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000037948s
	[INFO] 10.244.0.5:41194 - 18091 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058125s
	[INFO] 10.244.0.5:41194 - 31145 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030285s
	[INFO] 10.244.0.5:45465 - 2124 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001285344s
	[INFO] 10.244.0.5:45465 - 27982 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001234333s
	[INFO] 10.244.0.5:39252 - 9340 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064418s
	[INFO] 10.244.0.5:39252 - 52851 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000039679s
	[INFO] 10.244.0.24:51694 - 2891 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.007366761s
	[INFO] 10.244.0.24:56171 - 38150 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.007362978s
	[INFO] 10.244.0.24:39824 - 7027 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132742s
	[INFO] 10.244.0.24:44041 - 64601 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079852s
	[INFO] 10.244.0.24:38592 - 53840 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00029901s
	[INFO] 10.244.0.24:37012 - 30718 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119425s
	[INFO] 10.244.0.24:36764 - 30594 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00279472s
	[INFO] 10.244.0.24:36146 - 61219 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005011243s
	[INFO] 10.244.0.24:45700 - 8487 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001419883s
	[INFO] 10.244.0.24:58241 - 28609 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001405753s
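
Note on the NXDOMAIN runs above: they are ordinary resolver search-list expansion, not failures. A name with fewer than ndots (5) dots is tried with each search domain appended before being tried as-is; the NOERROR answers on the bare "registry.kube-system.svc.cluster.local" and "storage.googleapis.com" queries show resolution succeeding. A minimal Go sketch of that expansion, assuming the stock resolv.conf search list for a pod in kube-system (the list below is reconstructed from the query suffixes in the log, not read off the node):

package main

import (
	"fmt"
	"strings"
)

// candidates models glibc-style search expansion: with ndots:5, a name with
// fewer than five dots is tried with each search domain appended, then as-is.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Assumed search list; matches the suffixes seen in the coredns log above.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // the four NXDOMAIN names, then the one that answers NOERROR
	}
}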
	
	
	==> describe nodes <==
	Name:               addons-864899
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-864899
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=addons-864899
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T17_43_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-864899
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-864899"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 17:43:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-864899
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 17:49:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 17:46:30 +0000   Fri, 16 Aug 2024 17:43:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 17:46:30 +0000   Fri, 16 Aug 2024 17:43:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 17:46:30 +0000   Fri, 16 Aug 2024 17:43:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 17:46:30 +0000   Fri, 16 Aug 2024 17:43:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-864899
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ac051db66494860961e2847297e84e1
	  System UUID:                2523c80a-07d5-4ab2-b049-b44f710bb48e
	  Boot ID:                    6cf3c121-8478-4b33-820f-e176429c0afc
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-6np5w      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-tcnmn                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-gpbrj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-w4vnd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m52s
	  kube-system                 coredns-6f6b679f8f-lls4v                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpathplugin-qtzcx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 etcd-addons-864899                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-mcs5f                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-864899                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-864899       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-kjxrw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-864899                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-7x26w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-k9vv2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-6fb4cdfc84-pvhcl                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-hftrd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-6m9j4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-hblt6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-86d989889c-2prk7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-77d7d48b68-2f46h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-56675bb4d5-kdtv4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-576bc46687-jz6d7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-b44t7              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m59s  kube-proxy       
	  Normal   Starting                 6m6s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m6s   kubelet          Node addons-864899 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s   kubelet          Node addons-864899 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s   kubelet          Node addons-864899 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s   node-controller  Node addons-864899 event: Registered Node addons-864899 in Controller
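
The Allocated resources block above is worth reading closely: of the node's 2 allocatable CPUs (2000m), 1050m (52%) is already requested by system and addon pods, leaving 950m of headroom. Any pending pod requesting more than 950m CPU is rejected by the scheduler as unschedulable on this single node. The arithmetic, as a trivial Go check using the values copied from the dump:

package main

import "fmt"

func main() {
	// From the node dump above: Allocatable cpu 2 (= 2000m),
	// summed CPU requests 1050m (52%).
	const allocatableMilli = 2000
	const requestedMilli = 1050
	// Prints: schedulable CPU headroom: 950m
	fmt.Printf("schedulable CPU headroom: %dm\n", allocatableMilli-requestedMilli)
}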
	
	
	==> dmesg <==
	[Aug16 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014352] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.475030] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.058285] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002346] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016108] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003740] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003933] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.648870] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.745706] kauditd_printk_skb: 36 callbacks suppressed
	[Aug16 16:39] hrtimer: interrupt took 17228644 ns
	[Aug16 17:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [d082ca7c776dcea5d270d79ff6aa4eaa7c5286e8bd89c70f5d401979c6c6a774] <==
	{"level":"info","ts":"2024-08-16T17:43:20.117723Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-16T17:43:20.118083Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-16T17:43:20.118196Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-16T17:43:20.118655Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-16T17:43:20.118758Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-16T17:43:20.389040Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-16T17:43:20.389155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-16T17:43:20.389191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-16T17:43:20.389247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-16T17:43:20.389317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:43:20.389361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-16T17:43:20.389451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-16T17:43:20.397165Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-864899 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-16T17:43:20.397502Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:43:20.397920Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:43:20.401007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-16T17:43:20.405697Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:43:20.406787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-16T17:43:20.417020Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-16T17:43:20.419100Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-16T17:43:20.417683Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-16T17:43:20.418454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:43:20.421271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:43:20.421373Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-16T17:43:20.422385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [14dc3f65e39e999d9c177dadff522f415d951a46d892ca5505dac2485d76d3d9] <==
	2024/08/16 17:46:12 GCP Auth Webhook started!
	2024/08/16 17:46:30 Ready to marshal response ...
	2024/08/16 17:46:30 Ready to write response ...
	2024/08/16 17:46:30 Ready to marshal response ...
	2024/08/16 17:46:30 Ready to write response ...
	
	
	==> kernel <==
	 17:49:32 up  1:31,  0 users,  load average: 0.15, 1.36, 2.25
	Linux addons-864899 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fc6344743ae93a80aefd619de373b5a7acb8e6c20f8ed1bcd1644991f542fe7d] <==
	E0816 17:48:17.657732       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0816 17:48:24.497279       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:48:24.497312       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 17:48:24.717891       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:48:24.717928       1 main.go:299] handling current node
	I0816 17:48:34.717480       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:48:34.717521       1 main.go:299] handling current node
	W0816 17:48:38.831039       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:48:38.831074       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 17:48:44.717757       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:48:44.717794       1 main.go:299] handling current node
	W0816 17:48:48.076737       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:48:48.076776       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 17:48:54.718386       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:48:54.718420       1 main.go:299] handling current node
	I0816 17:49:04.717472       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:49:04.717519       1 main.go:299] handling current node
	W0816 17:49:04.730429       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:49:04.730464       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 17:49:14.718169       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:49:14.718202       1 main.go:299] handling current node
	W0816 17:49:17.365107       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 17:49:17.365143       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 17:49:24.717885       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0816 17:49:24.717920       1 main.go:299] handling current node
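
The recurring "forbidden" warnings above mean the kindnet ServiceAccount lacks list/watch on pods, namespaces, and networkpolicies at cluster scope; the daemon keeps handling its own node regardless. For reference, rules of roughly this shape would grant the missing access -- a hypothetical reconstruction only, assuming k8s.io/api is on the module path (the addon ships its own ClusterRole):

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Hypothetical PolicyRules covering the resources named in the
	// "cannot list resource" errors above.
	rules := []rbacv1.PolicyRule{
		{APIGroups: []string{""}, Resources: []string{"pods", "namespaces"}, Verbs: []string{"list", "watch"}},
		{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
	}
	for _, r := range rules {
		fmt.Printf("groups=%v resources=%v verbs=%v\n", r.APIGroups, r.Resources, r.Verbs)
	}
}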
	
	
	==> kube-apiserver [4c5d7397adebafc5943b82239314e534e5389254d5c041acc068912ff61d2e2f] <==
	W0816 17:44:41.667004       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:42.688047       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:43.721574       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:44.814038       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:45.818601       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:46.857608       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:47.286471       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.161.163:443: connect: connection refused
	E0816 17:44:47.286512       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.161.163:443: connect: connection refused" logger="UnhandledError"
	W0816 17:44:47.288468       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:47.373999       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.161.163:443: connect: connection refused
	E0816 17:44:47.374196       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.161.163:443: connect: connection refused" logger="UnhandledError"
	W0816 17:44:47.375868       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:47.898955       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:48.989424       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:50.016733       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:51.057826       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:44:52.079892       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.106.216.16:443: connect: connection refused
	W0816 17:45:06.303298       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.161.163:443: connect: connection refused
	E0816 17:45:06.303339       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.161.163:443: connect: connection refused" logger="UnhandledError"
	W0816 17:45:47.298024       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.161.163:443: connect: connection refused
	E0816 17:45:47.298070       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.161.163:443: connect: connection refused" logger="UnhandledError"
	W0816 17:45:47.381476       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.161.163:443: connect: connection refused
	E0816 17:45:47.381522       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.161.163:443: connect: connection refused" logger="UnhandledError"
	I0816 17:46:30.354205       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0816 17:46:30.395247       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
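
Two distinct failure modes appear in the webhook errors above: the volcano admission webhooks "fail closed" (requests are rejected while the webhook service is unreachable), whereas gcp-auth-mutate "fails open" (requests proceed unmutated). In admissionregistration terms that is failurePolicy Fail versus Ignore; a minimal sketch of the two constants, assuming k8s.io/api is available:

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	// failurePolicy: Fail   -> "failing closed" (volcano mutatequeue/mutatepod)
	// failurePolicy: Ignore -> "failing open"   (gcp-auth-mutate.k8s.io)
	fmt.Println("volcano webhooks:", admissionv1.Fail)
	fmt.Println("gcp-auth webhook:", admissionv1.Ignore)
}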
	
	
	==> kube-controller-manager [682980ddf02eda5b2c16bc4efe71f227b15278ef437c9c68701b9a03619d2c83] <==
	I0816 17:45:47.406780       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:47.418145       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:48.168424       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:48.190278       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:48.319751       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:49.187263       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:49.199013       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:49.327528       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:50.194790       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:50.313466       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:50.333449       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:50.335734       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:50.347719       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:50.353996       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0816 17:45:51.200900       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:51.210528       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:45:51.218147       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0816 17:46:13.290053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="17.094281ms"
	I0816 17:46:13.290353       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="55.745µs"
	I0816 17:46:20.026957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0816 17:46:20.064604       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0816 17:46:21.011219       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0816 17:46:21.040720       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0816 17:46:29.995487       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0816 17:46:30.130101       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-864899"
	
	
	==> kube-proxy [f7a1610f3e767e4782fa7d7517f2d01e8327bf5e1c63e00d5acfc258192d2904] <==
	I0816 17:43:32.392859       1 server_linux.go:66] "Using iptables proxy"
	I0816 17:43:32.525841       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0816 17:43:32.525904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0816 17:43:32.570322       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0816 17:43:32.575187       1 server_linux.go:169] "Using iptables Proxier"
	I0816 17:43:32.581764       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0816 17:43:32.582083       1 server.go:483] "Version info" version="v1.31.0"
	I0816 17:43:32.582100       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0816 17:43:32.592269       1 config.go:197] "Starting service config controller"
	I0816 17:43:32.592306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0816 17:43:32.592431       1 config.go:104] "Starting endpoint slice config controller"
	I0816 17:43:32.592438       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0816 17:43:32.593180       1 config.go:326] "Starting node config controller"
	I0816 17:43:32.593191       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0816 17:43:32.693362       1 shared_informer.go:320] Caches are synced for node config
	I0816 17:43:32.693424       1 shared_informer.go:320] Caches are synced for service config
	I0816 17:43:32.693509       1 shared_informer.go:320] Caches are synced for endpoint slice config
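
The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup handshake: each config controller blocks until its informer cache has completed an initial list before serving events. A minimal sketch of that idiom, assuming k8s.io/client-go is on the module path (the sync function here is a stand-in for an informer's HasSynced):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

func main() {
	stop := make(chan struct{})
	defer close(stop)
	synced := func() bool { return true } // stand-in for informer.HasSynced
	if cache.WaitForCacheSync(stop, synced) {
		fmt.Println("caches are synced")
	}
}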
	
	
	==> kube-scheduler [24f36c489fe0409df147281d7eb860eaeaf99c2bf8e26aeeaecee79d50753a0b] <==
	E0816 17:43:24.107851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 17:43:24.108034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 17:43:24.108199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 17:43:24.108354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 17:43:24.108509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 17:43:24.108662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 17:43:24.108840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 17:43:24.109015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 17:43:24.109209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 17:43:24.109379       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.106757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 17:43:24.109618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0816 17:43:24.109959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 17:43:24.110115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0816 17:43:24.110240       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0816 17:43:25.599145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 16 17:47:29 addons-864899 kubelet[1499]: E0816 17:47:29.384944    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:47:36 addons-864899 kubelet[1499]: I0816 17:47:36.368672    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-pvhcl" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:47:40 addons-864899 kubelet[1499]: I0816 17:47:40.368194    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:47:40 addons-864899 kubelet[1499]: E0816 17:47:40.368821    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:47:46 addons-864899 kubelet[1499]: I0816 17:47:46.368938    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hftrd" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:47:51 addons-864899 kubelet[1499]: I0816 17:47:51.368417    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k9vv2" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:47:54 addons-864899 kubelet[1499]: I0816 17:47:54.368713    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:47:54 addons-864899 kubelet[1499]: E0816 17:47:54.369544    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:48:09 addons-864899 kubelet[1499]: I0816 17:48:09.369337    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:48:09 addons-864899 kubelet[1499]: E0816 17:48:09.369543    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:48:23 addons-864899 kubelet[1499]: I0816 17:48:23.367879    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:48:23 addons-864899 kubelet[1499]: E0816 17:48:23.368113    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:48:38 addons-864899 kubelet[1499]: I0816 17:48:38.368940    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:48:38 addons-864899 kubelet[1499]: E0816 17:48:38.369187    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:48:51 addons-864899 kubelet[1499]: I0816 17:48:51.368778    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:48:51 addons-864899 kubelet[1499]: E0816 17:48:51.369041    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:49:02 addons-864899 kubelet[1499]: I0816 17:49:02.368067    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-k9vv2" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:49:03 addons-864899 kubelet[1499]: I0816 17:49:03.367854    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:49:03 addons-864899 kubelet[1499]: E0816 17:49:03.368053    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:49:06 addons-864899 kubelet[1499]: I0816 17:49:06.369910    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-pvhcl" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:49:10 addons-864899 kubelet[1499]: I0816 17:49:10.368114    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-hftrd" secret="" err="secret \"gcp-auth\" not found"
	Aug 16 17:49:17 addons-864899 kubelet[1499]: I0816 17:49:17.368598    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:49:17 addons-864899 kubelet[1499]: E0816 17:49:17.368813    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	Aug 16 17:49:30 addons-864899 kubelet[1499]: I0816 17:49:30.368370    1499 scope.go:117] "RemoveContainer" containerID="0664fe9de8aae822ff0e67a02eb633ccfb6985e6f8f8209eb8ec1e91285f1c48"
	Aug 16 17:49:30 addons-864899 kubelet[1499]: E0816 17:49:30.369068    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-tcnmn_gadget(0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf)\"" pod="gadget/gadget-tcnmn" podUID="0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf"
	
	
	==> storage-provisioner [6aea42a6df509f2ebd7dfe76b3c1efade576609e1c47f60a59f73285c8a2e657] <==
	I0816 17:43:37.592779       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 17:43:37.636311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 17:43:37.636375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 17:43:37.668240       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 17:43:37.668302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1c38b8ed-74e7-44d2-b8d3-ae26d5a66688", APIVersion:"v1", ResourceVersion:"570", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-864899_4d1bf025-ed0a-4c95-9177-27773af28af6 became leader
	I0816 17:43:37.668814       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-864899_4d1bf025-ed0a-4c95-9177-27773af28af6!
	I0816 17:43:37.769445       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-864899_4d1bf025-ed0a-4c95-9177-27773af28af6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-864899 -n addons-864899
helpers_test.go:261: (dbg) Run:  kubectl --context addons-864899 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-zpv9t ingress-nginx-admission-patch-vd7pb test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-864899 describe pod ingress-nginx-admission-create-zpv9t ingress-nginx-admission-patch-vd7pb test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-864899 describe pod ingress-nginx-admission-create-zpv9t ingress-nginx-admission-patch-vd7pb test-job-nginx-0: exit status 1 (86.414361ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zpv9t" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vd7pb" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-864899 describe pod ingress-nginx-admission-create-zpv9t ingress-nginx-admission-patch-vd7pb test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.84s)

TestStartStop/group/old-k8s-version/serial/SecondStart (375.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-686713 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0816 18:31:13.788273  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-686713 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m11.046913847s)

-- stdout --
	* [old-k8s-version-686713] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-686713" primary control-plane node in "old-k8s-version-686713" cluster
	* Pulling base image v0.0.44-1723740748-19452 ...
	* Restarting existing docker container for "old-k8s-version-686713" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-686713 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0816 18:30:47.831571  495127 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:30:47.831865  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:30:47.831880  495127 out.go:358] Setting ErrFile to fd 2...
	I0816 18:30:47.831886  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:30:47.832151  495127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:30:47.832766  495127 out.go:352] Setting JSON to false
	I0816 18:30:47.833907  495127 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7978,"bootTime":1723825070,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 18:30:47.833982  495127 start.go:139] virtualization:  
	I0816 18:30:47.836685  495127 out.go:177] * [old-k8s-version-686713] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 18:30:47.838363  495127 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:30:47.838425  495127 notify.go:220] Checking for updates...
	I0816 18:30:47.847675  495127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:30:47.849771  495127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 18:30:47.851935  495127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 18:30:47.854069  495127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:30:47.856084  495127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:30:47.858446  495127 config.go:182] Loaded profile config "old-k8s-version-686713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 18:30:47.860910  495127 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0816 18:30:47.862906  495127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:30:47.887460  495127 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:30:47.887585  495127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:30:47.963884  495127 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-16 18:30:47.951690016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:30:47.964007  495127 docker.go:307] overlay module found
	I0816 18:30:47.967151  495127 out.go:177] * Using the docker driver based on existing profile
	I0816 18:30:47.968979  495127 start.go:297] selected driver: docker
	I0816 18:30:47.969064  495127 start.go:901] validating driver "docker" against &{Name:old-k8s-version-686713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:30:47.969195  495127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:30:47.969884  495127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:30:48.022713  495127 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-16 18:30:48.012692444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:30:48.023086  495127 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:30:48.023118  495127 cni.go:84] Creating CNI manager for ""
	I0816 18:30:48.023127  495127 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 18:30:48.023173  495127 start.go:340] cluster config:
	{Name:old-k8s-version-686713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:30:48.025575  495127 out.go:177] * Starting "old-k8s-version-686713" primary control-plane node in "old-k8s-version-686713" cluster
	I0816 18:30:48.027882  495127 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0816 18:30:48.029805  495127 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0816 18:30:48.031999  495127 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 18:30:48.032066  495127 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0816 18:30:48.032080  495127 cache.go:56] Caching tarball of preloaded images
	I0816 18:30:48.032100  495127 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 18:30:48.032173  495127 preload.go:172] Found /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 18:30:48.032183  495127 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0816 18:30:48.032308  495127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/config.json ...
	W0816 18:30:48.057457  495127 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0816 18:30:48.057478  495127 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 18:30:48.057562  495127 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 18:30:48.057589  495127 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 18:30:48.057599  495127 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 18:30:48.057609  495127 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 18:30:48.057619  495127 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0816 18:30:48.181378  495127 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0816 18:30:48.181416  495127 cache.go:194] Successfully downloaded all kic artifacts
	I0816 18:30:48.181456  495127 start.go:360] acquireMachinesLock for old-k8s-version-686713: {Name:mk79ed0955f8e962903fbe1daa0253b6e320d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:30:48.181523  495127 start.go:364] duration metric: took 43.118µs to acquireMachinesLock for "old-k8s-version-686713"
	I0816 18:30:48.181549  495127 start.go:96] Skipping create...Using existing machine configuration
	I0816 18:30:48.181556  495127 fix.go:54] fixHost starting: 
	I0816 18:30:48.181851  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:48.203857  495127 fix.go:112] recreateIfNeeded on old-k8s-version-686713: state=Stopped err=<nil>
	W0816 18:30:48.203888  495127 fix.go:138] unexpected machine state, will restart: <nil>
	I0816 18:30:48.206597  495127 out.go:177] * Restarting existing docker container for "old-k8s-version-686713" ...
	I0816 18:30:48.208978  495127 cli_runner.go:164] Run: docker start old-k8s-version-686713
	I0816 18:30:48.535309  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:48.566667  495127 kic.go:430] container "old-k8s-version-686713" state is running.
	I0816 18:30:48.569319  495127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-686713
	I0816 18:30:48.591125  495127 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/config.json ...
	I0816 18:30:48.591474  495127 machine.go:93] provisionDockerMachine start ...
	I0816 18:30:48.591598  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:48.629357  495127 main.go:141] libmachine: Using SSH client type: native
	I0816 18:30:48.629716  495127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0816 18:30:48.629731  495127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0816 18:30:48.630970  495127 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0816 18:30:51.764892  495127 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-686713
	
	I0816 18:30:51.764920  495127 ubuntu.go:169] provisioning hostname "old-k8s-version-686713"
	I0816 18:30:51.765009  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:51.785781  495127 main.go:141] libmachine: Using SSH client type: native
	I0816 18:30:51.786027  495127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0816 18:30:51.786045  495127 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-686713 && echo "old-k8s-version-686713" | sudo tee /etc/hostname
	I0816 18:30:51.944719  495127 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-686713
	
	I0816 18:30:51.944801  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:51.991490  495127 main.go:141] libmachine: Using SSH client type: native
	I0816 18:30:51.991756  495127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0816 18:30:51.991777  495127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-686713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-686713/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-686713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 18:30:52.141077  495127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0816 18:30:52.141102  495127 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19461-287979/.minikube CaCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19461-287979/.minikube}
	I0816 18:30:52.141135  495127 ubuntu.go:177] setting up certificates
	I0816 18:30:52.141145  495127 provision.go:84] configureAuth start
	I0816 18:30:52.141205  495127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-686713
	I0816 18:30:52.161179  495127 provision.go:143] copyHostCerts
	I0816 18:30:52.161241  495127 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-287979/.minikube/ca.pem, removing ...
	I0816 18:30:52.161249  495127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.pem
	I0816 18:30:52.161324  495127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/ca.pem (1078 bytes)
	I0816 18:30:52.161418  495127 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-287979/.minikube/cert.pem, removing ...
	I0816 18:30:52.161424  495127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-287979/.minikube/cert.pem
	I0816 18:30:52.161448  495127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/cert.pem (1123 bytes)
	I0816 18:30:52.161498  495127 exec_runner.go:144] found /home/jenkins/minikube-integration/19461-287979/.minikube/key.pem, removing ...
	I0816 18:30:52.161505  495127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19461-287979/.minikube/key.pem
	I0816 18:30:52.161528  495127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19461-287979/.minikube/key.pem (1679 bytes)
	I0816 18:30:52.161581  495127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-686713 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-686713]
	I0816 18:30:52.971249  495127 provision.go:177] copyRemoteCerts
	I0816 18:30:52.971375  495127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 18:30:52.971443  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:52.989861  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:53.087261  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 18:30:53.115072  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0816 18:30:53.145496  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 18:30:53.172329  495127 provision.go:87] duration metric: took 1.031169316s to configureAuth
	I0816 18:30:53.172406  495127 ubuntu.go:193] setting minikube options for container-runtime
	I0816 18:30:53.172655  495127 config.go:182] Loaded profile config "old-k8s-version-686713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 18:30:53.172686  495127 machine.go:96] duration metric: took 4.581200475s to provisionDockerMachine
	I0816 18:30:53.172720  495127 start.go:293] postStartSetup for "old-k8s-version-686713" (driver="docker")
	I0816 18:30:53.172749  495127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 18:30:53.172842  495127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 18:30:53.172918  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:53.190236  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:53.286523  495127 ssh_runner.go:195] Run: cat /etc/os-release
	I0816 18:30:53.290222  495127 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0816 18:30:53.290261  495127 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0816 18:30:53.290272  495127 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0816 18:30:53.290279  495127 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0816 18:30:53.290289  495127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-287979/.minikube/addons for local assets ...
	I0816 18:30:53.290340  495127 filesync.go:126] Scanning /home/jenkins/minikube-integration/19461-287979/.minikube/files for local assets ...
	I0816 18:30:53.290421  495127 filesync.go:149] local asset: /home/jenkins/minikube-integration/19461-287979/.minikube/files/etc/ssl/certs/2933712.pem -> 2933712.pem in /etc/ssl/certs
	I0816 18:30:53.290527  495127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0816 18:30:53.299973  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/files/etc/ssl/certs/2933712.pem --> /etc/ssl/certs/2933712.pem (1708 bytes)
	I0816 18:30:53.326103  495127 start.go:296] duration metric: took 153.351132ms for postStartSetup
	I0816 18:30:53.326182  495127 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:30:53.326232  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:53.345231  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:53.434350  495127 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0816 18:30:53.441318  495127 fix.go:56] duration metric: took 5.259754736s for fixHost
	I0816 18:30:53.441340  495127 start.go:83] releasing machines lock for "old-k8s-version-686713", held for 5.259802908s
	I0816 18:30:53.441407  495127 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-686713
	I0816 18:30:53.464723  495127 ssh_runner.go:195] Run: cat /version.json
	I0816 18:30:53.464785  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:53.465027  495127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0816 18:30:53.465091  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:53.495535  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:53.501968  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:53.605030  495127 ssh_runner.go:195] Run: systemctl --version
	I0816 18:30:53.735465  495127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0816 18:30:53.739952  495127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0816 18:30:53.756519  495127 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0816 18:30:53.756623  495127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0816 18:30:53.765380  495127 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0816 18:30:53.765407  495127 start.go:495] detecting cgroup driver to use...
	I0816 18:30:53.765464  495127 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0816 18:30:53.765538  495127 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0816 18:30:53.780084  495127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0816 18:30:53.792626  495127 docker.go:217] disabling cri-docker service (if available) ...
	I0816 18:30:53.792737  495127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0816 18:30:53.805835  495127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0816 18:30:53.817569  495127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0816 18:30:53.922828  495127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0816 18:30:54.032767  495127 docker.go:233] disabling docker service ...
	I0816 18:30:54.032869  495127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0816 18:30:54.049023  495127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0816 18:30:54.062870  495127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0816 18:30:54.173969  495127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0816 18:30:54.302818  495127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0816 18:30:54.315537  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 18:30:54.335376  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0816 18:30:54.346622  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0816 18:30:54.360007  495127 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0816 18:30:54.360108  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0816 18:30:54.371773  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 18:30:54.381183  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0816 18:30:54.390466  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0816 18:30:54.400077  495127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0816 18:30:54.408856  495127 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0816 18:30:54.419083  495127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 18:30:54.427856  495127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 18:30:54.436240  495127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:30:54.534755  495127 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0816 18:30:54.742435  495127 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 18:30:54.742543  495127 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0816 18:30:54.746585  495127 start.go:563] Will wait 60s for crictl version
	I0816 18:30:54.746697  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:30:54.750255  495127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0816 18:30:54.791438  495127 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0816 18:30:54.791551  495127 ssh_runner.go:195] Run: containerd --version
	I0816 18:30:54.814766  495127 ssh_runner.go:195] Run: containerd --version
	I0816 18:30:54.843446  495127 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0816 18:30:54.845180  495127 cli_runner.go:164] Run: docker network inspect old-k8s-version-686713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 18:30:54.868717  495127 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0816 18:30:54.872665  495127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:30:54.883527  495127 kubeadm.go:883] updating cluster {Name:old-k8s-version-686713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0816 18:30:54.883709  495127 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 18:30:54.883773  495127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:30:54.939247  495127 containerd.go:627] all images are preloaded for containerd runtime.
	I0816 18:30:54.939274  495127 containerd.go:534] Images already preloaded, skipping extraction
	I0816 18:30:54.939336  495127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0816 18:30:55.003047  495127 containerd.go:627] all images are preloaded for containerd runtime.
	I0816 18:30:55.003077  495127 cache_images.go:84] Images are preloaded, skipping loading
	I0816 18:30:55.003087  495127 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0816 18:30:55.003291  495127 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-686713 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0816 18:30:55.003393  495127 ssh_runner.go:195] Run: sudo crictl info
	I0816 18:30:55.065289  495127 cni.go:84] Creating CNI manager for ""
	I0816 18:30:55.065319  495127 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 18:30:55.065330  495127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0816 18:30:55.065382  495127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-686713 NodeName:old-k8s-version-686713 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0816 18:30:55.065561  495127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-686713"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 18:30:55.065674  495127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0816 18:30:55.076665  495127 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 18:30:55.076790  495127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 18:30:55.086982  495127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0816 18:30:55.109056  495127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 18:30:55.130795  495127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0816 18:30:55.151224  495127 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0816 18:30:55.155446  495127 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 18:30:55.167745  495127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:30:55.274219  495127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:30:55.290402  495127 certs.go:68] Setting up /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713 for IP: 192.168.85.2
	I0816 18:30:55.290434  495127 certs.go:194] generating shared ca certs ...
	I0816 18:30:55.290467  495127 certs.go:226] acquiring lock for ca certs: {Name:mkc2317239a75a145c30b6075675eef6239ccdc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:30:55.290662  495127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key
	I0816 18:30:55.290745  495127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key
	I0816 18:30:55.290760  495127 certs.go:256] generating profile certs ...
	I0816 18:30:55.290881  495127 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.key
	I0816 18:30:55.290991  495127 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/apiserver.key.b45cdf4f
	I0816 18:30:55.291069  495127 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/proxy-client.key
	I0816 18:30:55.291231  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/293371.pem (1338 bytes)
	W0816 18:30:55.291293  495127 certs.go:480] ignoring /home/jenkins/minikube-integration/19461-287979/.minikube/certs/293371_empty.pem, impossibly tiny 0 bytes
	I0816 18:30:55.291308  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 18:30:55.291355  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem (1078 bytes)
	I0816 18:30:55.291414  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem (1123 bytes)
	I0816 18:30:55.291456  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/certs/key.pem (1679 bytes)
	I0816 18:30:55.291534  495127 certs.go:484] found cert: /home/jenkins/minikube-integration/19461-287979/.minikube/files/etc/ssl/certs/2933712.pem (1708 bytes)
	I0816 18:30:55.292239  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 18:30:55.330071  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 18:30:55.380287  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 18:30:55.428540  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 18:30:55.496725  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0816 18:30:55.527549  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 18:30:55.554044  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 18:30:55.579659  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 18:30:55.606548  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/files/etc/ssl/certs/2933712.pem --> /usr/share/ca-certificates/2933712.pem (1708 bytes)
	I0816 18:30:55.631979  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 18:30:55.657911  495127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19461-287979/.minikube/certs/293371.pem --> /usr/share/ca-certificates/293371.pem (1338 bytes)
	I0816 18:30:55.689044  495127 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 18:30:55.715167  495127 ssh_runner.go:195] Run: openssl version
	I0816 18:30:55.720821  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2933712.pem && ln -fs /usr/share/ca-certificates/2933712.pem /etc/ssl/certs/2933712.pem"
	I0816 18:30:55.733243  495127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2933712.pem
	I0816 18:30:55.736875  495127 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 16 17:53 /usr/share/ca-certificates/2933712.pem
	I0816 18:30:55.736959  495127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2933712.pem
	I0816 18:30:55.744005  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2933712.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 18:30:55.756264  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 18:30:55.765960  495127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:30:55.772258  495127 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 16 17:43 /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:30:55.772366  495127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 18:30:55.779880  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 18:30:55.789612  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/293371.pem && ln -fs /usr/share/ca-certificates/293371.pem /etc/ssl/certs/293371.pem"
	I0816 18:30:55.800608  495127 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/293371.pem
	I0816 18:30:55.807772  495127 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 16 17:53 /usr/share/ca-certificates/293371.pem
	I0816 18:30:55.807866  495127 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/293371.pem
	I0816 18:30:55.815672  495127 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/293371.pem /etc/ssl/certs/51391683.0"
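The ls/openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name: `openssl x509 -hash -noout` prints the hash (e.g. b5213941 for minikubeCA.pem), and the symlink gets a .0 suffix so OpenSSL-based clients can find the trust anchor by hashed-directory lookup. A sketch of that step in Go, assuming the openssl binary is on PATH:

// Compute a PEM's OpenSSL subject-hash and link it into certsDir
// as <hash>.0, like the `ln -fs` commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash) // .0 = first cert with this hash
	_ = os.Remove(link)                            // mirror ln -fs: replace if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}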
	I0816 18:30:55.824780  495127 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0816 18:30:55.828571  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0816 18:30:55.835764  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0816 18:30:55.843309  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0816 18:30:55.850399  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0816 18:30:55.857511  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0816 18:30:55.864506  495127 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
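The `-checkend 86400` runs above ask openssl whether each control-plane cert expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration before reuse. The same check in native Go with crypto/x509, as an illustrative sketch rather than minikube's implementation:

// Report whether a PEM certificate expires within the next duration d,
// equivalent to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}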
	I0816 18:30:55.871475  495127 kubeadm.go:392] StartCluster: {Name:old-k8s-version-686713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-686713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:30:55.871580  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 18:30:55.871649  495127 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 18:30:55.921295  495127 cri.go:89] found id: "fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625"
	I0816 18:30:55.921338  495127 cri.go:89] found id: "b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432"
	I0816 18:30:55.921350  495127 cri.go:89] found id: "c748f14e05e0dcf73020ce634fd9624ce85968bbe21c733540cb7dfc37da5535"
	I0816 18:30:55.921355  495127 cri.go:89] found id: "c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040"
	I0816 18:30:55.921359  495127 cri.go:89] found id: "823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330"
	I0816 18:30:55.921364  495127 cri.go:89] found id: "b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b"
	I0816 18:30:55.921372  495127 cri.go:89] found id: "78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b"
	I0816 18:30:55.921375  495127 cri.go:89] found id: "447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef"
	I0816 18:30:55.921379  495127 cri.go:89] found id: ""
	I0816 18:30:55.921435  495127 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 18:30:55.933537  495127 cri.go:116] JSON = null
	W0816 18:30:55.933600  495127 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
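The W-line records a consistency check: `crictl ps` found 8 kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` printed null (no paused containers), so minikube logs the mismatch and skips the unpause. A sketch of that check, assuming runc's JSON output carries the usual id/status fields:

// Parse `runc list -f json` (which prints "null" when empty) and
// collect the IDs whose status is "paused". Paths/flags as in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers(root string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var cs []runcContainer // stays nil when runc prints "null"
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers("/run/containerd/runc/k8s.io")
	fmt.Println(ids, err)
}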
	I0816 18:30:55.933670  495127 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 18:30:55.942385  495127 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0816 18:30:55.942406  495127 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0816 18:30:55.942468  495127 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0816 18:30:55.950540  495127 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 18:30:55.951015  495127 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-686713" does not appear in /home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 18:30:55.951152  495127 kubeconfig.go:62] /home/jenkins/minikube-integration/19461-287979/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-686713" cluster setting kubeconfig missing "old-k8s-version-686713" context setting]
	I0816 18:30:55.951467  495127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/kubeconfig: {Name:mkf88e71d9d88c4917ceda8d8c4a2c6c3a01b716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:30:55.953022  495127 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 18:30:55.961805  495127 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0816 18:30:55.961839  495127 kubeadm.go:597] duration metric: took 19.426453ms to restartPrimaryControlPlane
	I0816 18:30:55.961862  495127 kubeadm.go:394] duration metric: took 90.396481ms to StartCluster
	I0816 18:30:55.961877  495127 settings.go:142] acquiring lock: {Name:mke5f8bb0a9e0ea5bfe13ebba62cb869c1a95955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:30:55.961944  495127 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 18:30:55.962663  495127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/kubeconfig: {Name:mkf88e71d9d88c4917ceda8d8c4a2c6c3a01b716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 18:30:55.962894  495127 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0816 18:30:55.963260  495127 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0816 18:30:55.963334  495127 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-686713"
	I0816 18:30:55.963363  495127 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-686713"
	W0816 18:30:55.963369  495127 addons.go:243] addon storage-provisioner should already be in state true
	I0816 18:30:55.963391  495127 host.go:66] Checking if "old-k8s-version-686713" exists ...
	I0816 18:30:55.963864  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:55.964330  495127 config.go:182] Loaded profile config "old-k8s-version-686713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 18:30:55.964413  495127 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-686713"
	I0816 18:30:55.964459  495127 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-686713"
	I0816 18:30:55.964754  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:55.965096  495127 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-686713"
	I0816 18:30:55.965126  495127 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-686713"
	W0816 18:30:55.965132  495127 addons.go:243] addon metrics-server should already be in state true
	I0816 18:30:55.965162  495127 host.go:66] Checking if "old-k8s-version-686713" exists ...
	I0816 18:30:55.965649  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:55.967958  495127 addons.go:69] Setting dashboard=true in profile "old-k8s-version-686713"
	I0816 18:30:55.967990  495127 addons.go:234] Setting addon dashboard=true in "old-k8s-version-686713"
	W0816 18:30:55.967997  495127 addons.go:243] addon dashboard should already be in state true
	I0816 18:30:55.968028  495127 host.go:66] Checking if "old-k8s-version-686713" exists ...
	I0816 18:30:55.968429  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:55.970013  495127 out.go:177] * Verifying Kubernetes components...
	I0816 18:30:55.972634  495127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0816 18:30:56.029779  495127 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0816 18:30:56.034661  495127 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 18:30:56.034693  495127 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0816 18:30:56.034765  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:56.039141  495127 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 18:30:56.041777  495127 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:30:56.041799  495127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 18:30:56.041865  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:56.043824  495127 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-686713"
	W0816 18:30:56.043844  495127 addons.go:243] addon default-storageclass should already be in state true
	I0816 18:30:56.043869  495127 host.go:66] Checking if "old-k8s-version-686713" exists ...
	I0816 18:30:56.044275  495127 cli_runner.go:164] Run: docker container inspect old-k8s-version-686713 --format={{.State.Status}}
	I0816 18:30:56.066251  495127 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0816 18:30:56.068135  495127 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0816 18:30:56.070082  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 18:30:56.070107  495127 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 18:30:56.070186  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:56.099253  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:56.106818  495127 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 18:30:56.106838  495127 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 18:30:56.106900  495127 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-686713
	I0816 18:30:56.108131  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:56.142789  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:56.154797  495127 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/old-k8s-version-686713/id_rsa Username:docker}
	I0816 18:30:56.212350  495127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0816 18:30:56.255649  495127 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-686713" to be "Ready" ...
	I0816 18:30:56.291739  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:30:56.327926  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 18:30:56.328020  495127 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 18:30:56.409442  495127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 18:30:56.409515  495127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0816 18:30:56.417487  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 18:30:56.417559  495127 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 18:30:56.427695  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:30:56.479155  495127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 18:30:56.479234  495127 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0816 18:30:56.505549  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 18:30:56.505629  495127 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 18:30:56.560319  495127 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:30:56.560398  495127 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0816 18:30:56.599916  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 18:30:56.599998  495127 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0816 18:30:56.613073  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.613153  495127 retry.go:31] will retry after 159.329668ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
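From here on, every addon apply fails with "connection refused" because the apiserver is still coming up, and retry.go re-runs each kubectl apply after a growing, jittered delay (159ms, 275ms, 367ms, ... up to several seconds later in the log). A generic retry-with-backoff helper in that spirit, as a sketch rather than minikube's actual retry package:

// retry runs fn up to attempts times, sleeping a jittered, doubling
// delay between failures, and returns the last error if all fail.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	})
}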
	I0816 18:30:56.626856  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:30:56.677449  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 18:30:56.677527  495127 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0816 18:30:56.750455  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.750535  495127 retry.go:31] will retry after 275.661776ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.773150  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:30:56.783807  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 18:30:56.783885  495127 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0816 18:30:56.821579  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.821654  495127 retry.go:31] will retry after 367.10823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.862003  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 18:30:56.862080  495127 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0816 18:30:56.938642  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.938743  495127 retry.go:31] will retry after 395.647143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:56.943355  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 18:30:56.943433  495127 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 18:30:56.963831  495127 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 18:30:56.963903  495127 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 18:30:56.983856  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 18:30:57.027033  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0816 18:30:57.159113  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.159197  495127 retry.go:31] will retry after 128.819233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:30:57.169694  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.169774  495127 retry.go:31] will retry after 330.508614ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.189148  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0816 18:30:57.289146  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.289228  495127 retry.go:31] will retry after 212.907549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.289368  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 18:30:57.334684  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:30:57.393518  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.393600  495127 retry.go:31] will retry after 295.552062ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:30:57.495074  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.495154  495127 retry.go:31] will retry after 318.117237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.501411  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:30:57.502756  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:30:57.690075  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0816 18:30:57.710266  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.710348  495127 retry.go:31] will retry after 806.419791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:30:57.710421  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.710449  495127 retry.go:31] will retry after 432.33602ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:30:57.801519  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.801600  495127 retry.go:31] will retry after 598.187907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.813919  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:30:57.904199  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:57.904299  495127 retry.go:31] will retry after 1.14637471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.143612  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0816 18:30:58.236319  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.236351  495127 retry.go:31] will retry after 1.030650636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.256900  495127 node_ready.go:53] error getting node "old-k8s-version-686713": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-686713": dial tcp 192.168.85.2:8443: connect: connection refused
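In parallel, node_ready.go polls GET /api/v1/nodes/old-k8s-version-686713 and tolerates connection-refused errors like the one above while waiting up to 6m0s for the node to report Ready. A much simpler sketch of that waiting shape, polling only for the apiserver's TCP endpoint to accept connections (address as in the log; not the actual minikube code, which uses the Kubernetes client):

// Poll the apiserver endpoint until it accepts TCP connections
// or the deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Println("still waiting:", err) // e.g. connect: connection refused
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable within %v", addr, timeout)
}

func main() {
	fmt.Println(waitForAPIServer("192.168.85.2:8443", 6*time.Minute))
}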
	I0816 18:30:58.400111  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0816 18:30:58.506503  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.506545  495127 retry.go:31] will retry after 894.950708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.517833  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0816 18:30:58.621235  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:58.621282  495127 retry.go:31] will retry after 1.168387935s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.051541  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:30:59.153351  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.153397  495127 retry.go:31] will retry after 726.873651ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.268193  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0816 18:30:59.360969  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.361023  495127 retry.go:31] will retry after 1.004259138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.402166  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0816 18:30:59.500909  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.500958  495127 retry.go:31] will retry after 1.061948419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.789889  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:30:59.881411  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:30:59.884009  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.884041  495127 retry.go:31] will retry after 1.454891133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:30:59.994760  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:30:59.994796  495127 retry.go:31] will retry after 1.428184418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:00.266530  495127 node_ready.go:53] error getting node "old-k8s-version-686713": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-686713": dial tcp 192.168.85.2:8443: connect: connection refused
	I0816 18:31:00.365889  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0816 18:31:00.475808  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:00.475846  495127 retry.go:31] will retry after 2.793685284s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:00.563145  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0816 18:31:00.679423  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:00.679463  495127 retry.go:31] will retry after 1.868145959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:01.339374  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:31:01.423741  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:31:01.442639  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:01.442673  495127 retry.go:31] will retry after 1.608720629s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:31:01.544544  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:01.544578  495127 retry.go:31] will retry after 1.767724957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:02.548302  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0816 18:31:02.679992  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:02.680031  495127 retry.go:31] will retry after 4.101079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:02.756629  495127 node_ready.go:53] error getting node "old-k8s-version-686713": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-686713": dial tcp 192.168.85.2:8443: connect: connection refused
	I0816 18:31:03.052472  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0816 18:31:03.161659  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:03.161688  495127 retry.go:31] will retry after 3.500176321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:03.270565  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:31:03.312915  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0816 18:31:03.458420  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:03.458452  495127 retry.go:31] will retry after 1.514125238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0816 18:31:03.530249  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:03.530279  495127 retry.go:31] will retry after 5.167017353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:04.972911  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:31:05.256589  495127 node_ready.go:53] error getting node "old-k8s-version-686713": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-686713": dial tcp 192.168.85.2:8443: connect: connection refused
	W0816 18:31:05.749086  495127 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:05.749117  495127 retry.go:31] will retry after 3.567984165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0816 18:31:06.662753  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0816 18:31:06.781413  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 18:31:08.698279  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 18:31:09.317518  495127 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 18:31:16.403884  495127 node_ready.go:49] node "old-k8s-version-686713" has status "Ready":"True"
	I0816 18:31:16.403909  495127 node_ready.go:38] duration metric: took 20.148190466s for node "old-k8s-version-686713" to be "Ready" ...
	I0816 18:31:16.403921  495127 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 18:31:16.696393  495127 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vdt9d" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.207362  495127 pod_ready.go:93] pod "coredns-74ff55c5b-vdt9d" in "kube-system" namespace has status "Ready":"True"
	I0816 18:31:17.207385  495127 pod_ready.go:82] duration metric: took 510.912908ms for pod "coredns-74ff55c5b-vdt9d" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.207397  495127 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.323496  495127 pod_ready.go:98] node "old-k8s-version-686713" hosting pod "etcd-old-k8s-version-686713" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-686713" has status "Ready":"False"
	I0816 18:31:17.323574  495127 pod_ready.go:82] duration metric: took 116.168621ms for pod "etcd-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	E0816 18:31:17.323600  495127 pod_ready.go:67] WaitExtra: waitPodCondition: node "old-k8s-version-686713" hosting pod "etcd-old-k8s-version-686713" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-686713" has status "Ready":"False"
	I0816 18:31:17.323623  495127 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.388803  495127 pod_ready.go:98] node "old-k8s-version-686713" hosting pod "kube-apiserver-old-k8s-version-686713" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-686713" has status "Ready":"False"
	I0816 18:31:17.388879  495127 pod_ready.go:82] duration metric: took 65.214196ms for pod "kube-apiserver-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	E0816 18:31:17.388903  495127 pod_ready.go:67] WaitExtra: waitPodCondition: node "old-k8s-version-686713" hosting pod "kube-apiserver-old-k8s-version-686713" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-686713" has status "Ready":"False"
	I0816 18:31:17.388921  495127 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.448717  495127 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"True"
	I0816 18:31:17.448785  495127 pod_ready.go:82] duration metric: took 59.829179ms for pod "kube-controller-manager-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.448828  495127 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d2sb2" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.477382  495127 pod_ready.go:93] pod "kube-proxy-d2sb2" in "kube-system" namespace has status "Ready":"True"
	I0816 18:31:17.477448  495127 pod_ready.go:82] duration metric: took 28.598077ms for pod "kube-proxy-d2sb2" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:17.477484  495127 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
	I0816 18:31:19.490327  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:20.291324  495127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (13.628532886s)
	I0816 18:31:20.875825  495127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (14.094354572s)
	I0816 18:31:20.876119  495127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.558573809s)
	I0816 18:31:20.876165  495127 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-686713"
	I0816 18:31:20.876039  495127 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.177728455s)
	I0816 18:31:20.877630  495127 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-686713 addons enable metrics-server
	
	I0816 18:31:20.879874  495127 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0816 18:31:20.881680  495127 addons.go:510] duration metric: took 24.918418842s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
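Note: the four Completed: durations above (13.6s, 14.1s, 11.6s, 12.2s) all cover roughly the same wall-clock window. The applies were issued between 18:31:06 and 18:31:09, apparently while the apiserver was still recovering, and could only finish once it became reachable again around 18:31:20, shortly after the node reported Ready at 18:31:16; the 24.9s "enable addons" total is dominated by that outage rather than by the applies themselves.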
	I0816 18:31:21.983923  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:23.986766  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:26.483403  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:28.483671  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:30.484463  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:32.984342  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:34.984385  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:36.993673  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:38.995095  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:41.484123  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:43.985157  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:46.484432  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:48.984087  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:51.484912  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:53.983426  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:55.984249  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:31:57.986632  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:00.486205  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:02.983331  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:04.985051  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:07.483754  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:09.483881  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:11.989322  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:14.486116  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:16.985122  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:19.486993  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:21.987073  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:24.484053  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:27.003579  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:29.485019  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:31.485119  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:33.486626  495127 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:34.497593  495127 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace has status "Ready":"True"
	I0816 18:32:34.497621  495127 pod_ready.go:82] duration metric: took 1m17.02011637s for pod "kube-scheduler-old-k8s-version-686713" in "kube-system" namespace to be "Ready" ...
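The pod_ready.go waits above poll each system pod until its Ready condition reports True (the kube-scheduler took 1m17s here). A minimal sketch of that kind of check with client-go, assuming the kubeconfig path from the log and a fixed 2s poll interval; this is illustrative, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(
                context.TODO(), "kube-scheduler-old-k8s-version-686713", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }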
	I0816 18:32:34.497633  495127 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace to be "Ready" ...
	I0816 18:32:36.503662  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:38.503713  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:40.504805  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:43.006002  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:45.009768  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:47.504083  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:50.013195  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:52.503241  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:54.503497  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:56.506975  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:32:59.013403  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:01.503218  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:03.506315  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:05.508710  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:08.009202  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:10.503765  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:12.504466  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:15.020533  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:17.503100  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:19.503606  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:21.503684  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:23.504042  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:26.011099  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:28.504423  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:31.008566  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:33.503825  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:35.504066  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:38.008044  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:40.503852  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:42.504155  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:44.516300  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:47.004830  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:49.006510  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:51.008333  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:53.009520  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:55.011882  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:57.018855  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:33:59.503301  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:01.505728  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:04.010227  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:06.013732  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:08.504270  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:11.012973  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:13.016074  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:15.504025  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:18.019048  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:20.503336  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:22.504259  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:25.019283  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:27.503602  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:29.503939  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:32.005994  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:34.014654  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:36.503586  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:38.504354  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:40.504518  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:43.009520  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:45.021595  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:47.504488  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:50.018407  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:52.503516  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:54.504236  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:57.009944  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:34:59.504022  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:02.011099  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:04.507751  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:07.005729  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:09.008704  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:11.015587  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:13.504231  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:15.504351  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:18.013770  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:20.018309  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:22.503195  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:24.503873  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:27.006392  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:29.504206  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:31.508570  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:34.005354  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:36.016860  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:38.503574  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:40.504584  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:43.008059  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:45.022629  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:47.506448  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:50.007231  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:52.011968  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:54.504189  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:57.005149  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:35:59.009474  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:01.503876  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:03.504132  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:06.011907  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:08.504002  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:11.006711  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:13.012853  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:15.032209  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:17.503705  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:19.503829  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:22.019847  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:24.509716  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:27.007056  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:29.008486  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:31.503551  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:33.504092  495127 pod_ready.go:103] pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace has status "Ready":"False"
	I0816 18:36:34.504048  495127 pod_ready.go:82] duration metric: took 4m0.006400257s for pod "metrics-server-9975d5f86-nj5gr" in "kube-system" namespace to be "Ready" ...
	E0816 18:36:34.504077  495127 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0816 18:36:34.504087  495127 pod_ready.go:39] duration metric: took 5m18.100155194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
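Note: the metrics-server wait could never succeed; the kubelet entries gathered further down show the pod stuck in ErrImagePull/ImagePullBackOff because its image host fake.domain does not resolve, so the surrounding context's deadline expires first and the wait surfaces "context deadline exceeded". A minimal sketch of a deadline-bounded poll of this shape, with a hypothetical ready() check and a made-up 4-minute budget:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // ready is a hypothetical stand-in for the real Ready-condition check.
    func ready() bool { return false }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            if ready() {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                // Prints "context deadline exceeded", matching the
                // WaitExtra error in the log above.
                fmt.Println("WaitExtra:", ctx.Err())
                return
            case <-time.After(2 * time.Second):
            }
        }
    }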
	I0816 18:36:34.504100  495127 api_server.go:52] waiting for apiserver process to appear ...
	I0816 18:36:34.504129  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:36:34.504194  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:36:34.558761  495127 cri.go:89] found id: "c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d"
	I0816 18:36:34.558848  495127 cri.go:89] found id: "447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef"
	I0816 18:36:34.558862  495127 cri.go:89] found id: ""
	I0816 18:36:34.558870  495127 logs.go:276] 2 containers: [c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d 447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef]
	I0816 18:36:34.558928  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.562601  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.565956  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0816 18:36:34.566050  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:36:34.609534  495127 cri.go:89] found id: "2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9"
	I0816 18:36:34.609555  495127 cri.go:89] found id: "823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330"
	I0816 18:36:34.609559  495127 cri.go:89] found id: ""
	I0816 18:36:34.609567  495127 logs.go:276] 2 containers: [2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9 823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330]
	I0816 18:36:34.609622  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.613139  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.616450  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0816 18:36:34.616534  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:36:34.661405  495127 cri.go:89] found id: "e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df"
	I0816 18:36:34.661430  495127 cri.go:89] found id: "fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625"
	I0816 18:36:34.661435  495127 cri.go:89] found id: ""
	I0816 18:36:34.661441  495127 logs.go:276] 2 containers: [e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625]
	I0816 18:36:34.661507  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.666640  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.670059  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:36:34.670142  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:36:34.706771  495127 cri.go:89] found id: "67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141"
	I0816 18:36:34.706793  495127 cri.go:89] found id: "b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b"
	I0816 18:36:34.706798  495127 cri.go:89] found id: ""
	I0816 18:36:34.706808  495127 logs.go:276] 2 containers: [67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141 b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b]
	I0816 18:36:34.706865  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.710727  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.713974  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:36:34.714083  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:36:34.761791  495127 cri.go:89] found id: "70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513"
	I0816 18:36:34.761855  495127 cri.go:89] found id: "c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040"
	I0816 18:36:34.761874  495127 cri.go:89] found id: ""
	I0816 18:36:34.761903  495127 logs.go:276] 2 containers: [70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513 c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040]
	I0816 18:36:34.761974  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.765559  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.768587  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:36:34.768682  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:36:34.810012  495127 cri.go:89] found id: "6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9"
	I0816 18:36:34.810038  495127 cri.go:89] found id: "78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b"
	I0816 18:36:34.810044  495127 cri.go:89] found id: ""
	I0816 18:36:34.810051  495127 logs.go:276] 2 containers: [6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9 78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b]
	I0816 18:36:34.810133  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.813938  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.817131  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0816 18:36:34.817196  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:36:34.854577  495127 cri.go:89] found id: "ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0"
	I0816 18:36:34.854599  495127 cri.go:89] found id: "b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432"
	I0816 18:36:34.854604  495127 cri.go:89] found id: ""
	I0816 18:36:34.854612  495127 logs.go:276] 2 containers: [ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0 b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432]
	I0816 18:36:34.854668  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.858580  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.861849  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:36:34.861923  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:36:34.908546  495127 cri.go:89] found id: "83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830"
	I0816 18:36:34.908567  495127 cri.go:89] found id: "761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3"
	I0816 18:36:34.908572  495127 cri.go:89] found id: ""
	I0816 18:36:34.908579  495127 logs.go:276] 2 containers: [83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830 761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3]
	I0816 18:36:34.908663  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.912184  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:34.915626  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:36:34.915713  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:36:34.960683  495127 cri.go:89] found id: "3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503"
	I0816 18:36:34.960704  495127 cri.go:89] found id: ""
	I0816 18:36:34.960711  495127 logs.go:276] 1 containers: [3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503]
	I0816 18:36:34.960796  495127 ssh_runner.go:195] Run: which crictl
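Note: most of the crictl ps -a --quiet --name=... queries above return two container IDs. The -a flag includes exited containers, so for each component the list plausibly holds both the container from before this restart and the one started after it (only kubernetes-dashboard, which would first appear during this start, has a single ID). The same enumeration from Go, shelling out exactly as the log does:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // IDs of all kube-apiserver containers, including exited ones.
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d container(s): %v\n", len(ids), ids)
    }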
	I0816 18:36:34.964929  495127 logs.go:123] Gathering logs for storage-provisioner [761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3] ...
	I0816 18:36:34.964956  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3"
	I0816 18:36:35.003329  495127 logs.go:123] Gathering logs for kubernetes-dashboard [3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503] ...
	I0816 18:36:35.003358  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503"
	I0816 18:36:35.051035  495127 logs.go:123] Gathering logs for kube-apiserver [447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef] ...
	I0816 18:36:35.051064  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef"
	I0816 18:36:35.120859  495127 logs.go:123] Gathering logs for coredns [e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df] ...
	I0816 18:36:35.120941  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df"
	I0816 18:36:35.168536  495127 logs.go:123] Gathering logs for kube-scheduler [67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141] ...
	I0816 18:36:35.168566  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141"
	I0816 18:36:35.207430  495127 logs.go:123] Gathering logs for kube-controller-manager [6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9] ...
	I0816 18:36:35.207457  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9"
	I0816 18:36:35.267944  495127 logs.go:123] Gathering logs for container status ...
	I0816 18:36:35.267974  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:36:35.323332  495127 logs.go:123] Gathering logs for kube-apiserver [c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d] ...
	I0816 18:36:35.323400  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d"
	I0816 18:36:35.388177  495127 logs.go:123] Gathering logs for kube-proxy [c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040] ...
	I0816 18:36:35.388208  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040"
	I0816 18:36:35.429532  495127 logs.go:123] Gathering logs for storage-provisioner [83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830] ...
	I0816 18:36:35.429563  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830"
	I0816 18:36:35.470193  495127 logs.go:123] Gathering logs for containerd ...
	I0816 18:36:35.470219  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0816 18:36:35.529428  495127 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:36:35.529461  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:36:35.760520  495127 logs.go:123] Gathering logs for etcd [823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330] ...
	I0816 18:36:35.760559  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330"
	I0816 18:36:35.834932  495127 logs.go:123] Gathering logs for coredns [fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625] ...
	I0816 18:36:35.834969  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625"
	I0816 18:36:35.893665  495127 logs.go:123] Gathering logs for kindnet [b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432] ...
	I0816 18:36:35.893693  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432"
	I0816 18:36:35.944671  495127 logs.go:123] Gathering logs for kube-proxy [70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513] ...
	I0816 18:36:35.944705  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513"
	I0816 18:36:36.066202  495127 logs.go:123] Gathering logs for kube-controller-manager [78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b] ...
	I0816 18:36:36.066231  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b"
	I0816 18:36:36.211100  495127 logs.go:123] Gathering logs for kindnet [ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0] ...
	I0816 18:36:36.211136  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0"
	I0816 18:36:36.269208  495127 logs.go:123] Gathering logs for kubelet ...
	I0816 18:36:36.269246  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 18:36:36.328224  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:19 old-k8s-version-686713 kubelet[661]: E0816 18:31:19.709619     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:36.328432  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:20 old-k8s-version-686713 kubelet[661]: E0816 18:31:20.578848     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.331793  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:35 old-k8s-version-686713 kubelet[661]: E0816 18:31:35.964767     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:36.333623  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:42 old-k8s-version-686713 kubelet[661]: E0816 18:31:42.681634     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.333989  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:43 old-k8s-version-686713 kubelet[661]: E0816 18:31:43.679894     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.334682  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:47 old-k8s-version-686713 kubelet[661]: E0816 18:31:47.649890     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.334901  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:47 old-k8s-version-686713 kubelet[661]: E0816 18:31:47.950964     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.335363  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:51 old-k8s-version-686713 kubelet[661]: E0816 18:31:51.702800     661 pod_workers.go:191] Error syncing pod 657e9855-bf06-453c-b32b-8665ce255ff7 ("storage-provisioner_kube-system(657e9855-bf06-453c-b32b-8665ce255ff7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(657e9855-bf06-453c-b32b-8665ce255ff7)"
	W0816 18:36:36.336339  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:59 old-k8s-version-686713 kubelet[661]: E0816 18:31:59.727646     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.338907  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:00 old-k8s-version-686713 kubelet[661]: E0816 18:32:00.969457     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:36.339418  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:07 old-k8s-version-686713 kubelet[661]: E0816 18:32:07.650284     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.339626  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:11 old-k8s-version-686713 kubelet[661]: E0816 18:32:11.950723     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.340241  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:22 old-k8s-version-686713 kubelet[661]: E0816 18:32:22.799899     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.340483  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:26 old-k8s-version-686713 kubelet[661]: E0816 18:32:26.950439     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.340837  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:27 old-k8s-version-686713 kubelet[661]: E0816 18:32:27.650291     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.341218  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:38 old-k8s-version-686713 kubelet[661]: E0816 18:32:38.950175     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.341427  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:39 old-k8s-version-686713 kubelet[661]: E0816 18:32:39.953204     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.341783  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:51 old-k8s-version-686713 kubelet[661]: E0816 18:32:51.951277     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.344264  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:51 old-k8s-version-686713 kubelet[661]: E0816 18:32:51.968189     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:36.344884  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:05 old-k8s-version-686713 kubelet[661]: E0816 18:33:05.923756     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.345092  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:06 old-k8s-version-686713 kubelet[661]: E0816 18:33:06.950498     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.345442  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:07 old-k8s-version-686713 kubelet[661]: E0816 18:33:07.649752     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.345647  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:17 old-k8s-version-686713 kubelet[661]: E0816 18:33:17.954137     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.346000  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:21 old-k8s-version-686713 kubelet[661]: E0816 18:33:21.950218     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.346210  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:29 old-k8s-version-686713 kubelet[661]: E0816 18:33:29.953172     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.346561  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:35 old-k8s-version-686713 kubelet[661]: E0816 18:33:35.951401     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.346774  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:43 old-k8s-version-686713 kubelet[661]: E0816 18:33:43.953295     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.347128  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:47 old-k8s-version-686713 kubelet[661]: E0816 18:33:47.950982     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.347330  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:55 old-k8s-version-686713 kubelet[661]: E0816 18:33:55.950598     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.347680  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:01 old-k8s-version-686713 kubelet[661]: E0816 18:34:01.950707     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.347885  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:09 old-k8s-version-686713 kubelet[661]: E0816 18:34:09.950579     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.348234  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:15 old-k8s-version-686713 kubelet[661]: E0816 18:34:15.951359     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.351456  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:23 old-k8s-version-686713 kubelet[661]: E0816 18:34:23.960857     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:36.352115  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:31 old-k8s-version-686713 kubelet[661]: E0816 18:34:31.201952     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.352472  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:37 old-k8s-version-686713 kubelet[661]: E0816 18:34:37.650224     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.352681  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:38 old-k8s-version-686713 kubelet[661]: E0816 18:34:38.950603     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.353048  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:49 old-k8s-version-686713 kubelet[661]: E0816 18:34:49.950383     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.353258  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:51 old-k8s-version-686713 kubelet[661]: E0816 18:34:51.950822     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.353609  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:02 old-k8s-version-686713 kubelet[661]: E0816 18:35:02.950213     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.353818  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:04 old-k8s-version-686713 kubelet[661]: E0816 18:35:04.950445     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.354181  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:15 old-k8s-version-686713 kubelet[661]: E0816 18:35:15.950603     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.354390  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:19 old-k8s-version-686713 kubelet[661]: E0816 18:35:19.955737     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.354741  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:28 old-k8s-version-686713 kubelet[661]: E0816 18:35:28.950646     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.354946  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:31 old-k8s-version-686713 kubelet[661]: E0816 18:35:31.950471     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.355296  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:43 old-k8s-version-686713 kubelet[661]: E0816 18:35:43.955091     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.355503  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:45 old-k8s-version-686713 kubelet[661]: E0816 18:35:45.950556     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.355854  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:55 old-k8s-version-686713 kubelet[661]: E0816 18:35:55.950256     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.356061  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:00 old-k8s-version-686713 kubelet[661]: E0816 18:36:00.951188     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.356410  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:07 old-k8s-version-686713 kubelet[661]: E0816 18:36:07.951260     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.356622  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:13 old-k8s-version-686713 kubelet[661]: E0816 18:36:13.954479     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.356954  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: E0816 18:36:22.950180     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.357148  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:28 old-k8s-version-686713 kubelet[661]: E0816 18:36:28.950473     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.357475  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: E0816 18:36:35.958240     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	I0816 18:36:36.357485  495127 logs.go:123] Gathering logs for dmesg ...
	I0816 18:36:36.357498  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:36:36.375291  495127 logs.go:123] Gathering logs for etcd [2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9] ...
	I0816 18:36:36.375321  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9"
	I0816 18:36:36.420055  495127 logs.go:123] Gathering logs for kube-scheduler [b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b] ...
	I0816 18:36:36.420086  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b"
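[Editor's note: the "Gathering logs for ..." steps above are plain shell-outs to crictl over SSH. A minimal Go sketch of the same step, run locally rather than via ssh_runner; the container ID and tail size are copied from the log, and the helper name is hypothetical, not minikube's actual API.]

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs runs `sudo crictl logs --tail <n> <id>` and returns the
// combined output. crictl must be on PATH and the CRI socket reachable.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	// ID taken from the kube-scheduler gathering step above; any container ID works.
	logs, err := tailContainerLogs("b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b", 400)
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}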
	I0816 18:36:36.471058  495127 out.go:358] Setting ErrFile to fd 2...
	I0816 18:36:36.471082  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 18:36:36.471168  495127 out.go:270] X Problems detected in kubelet:
	W0816 18:36:36.471181  495127 out.go:270]   Aug 16 18:36:07 old-k8s-version-686713 kubelet[661]: E0816 18:36:07.951260     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.471221  495127 out.go:270]   Aug 16 18:36:13 old-k8s-version-686713 kubelet[661]: E0816 18:36:13.954479     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.471229  495127 out.go:270]   Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: E0816 18:36:22.950180     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:36.471235  495127 out.go:270]   Aug 16 18:36:28 old-k8s-version-686713 kubelet[661]: E0816 18:36:28.950473     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:36.471242  495127 out.go:270]   Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: E0816 18:36:35.958240     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	I0816 18:36:36.471249  495127 out.go:358] Setting ErrFile to fd 2...
	I0816 18:36:36.471256  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
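[Editor's note: the long run of "Found kubelet problem" warnings above comes from scanning journalctl output for the kubelet unit and flagging lines that match known failure markers. A minimal sketch of that scan, assuming a simple substring match; the marker list and structure are illustrative, not minikube's actual logs.go implementation.]

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// problemMarkers are failure states worth surfacing; the exact set minikube
// checks is an assumption here.
var problemMarkers = []string{"ErrImagePull", "ImagePullBackOff", "CrashLoopBackOff"}

func main() {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // kubelet lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range problemMarkers {
			if strings.Contains(line, m) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}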
	I0816 18:36:46.472746  495127 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:36:46.485597  495127 api_server.go:72] duration metric: took 5m50.522653293s to wait for apiserver process to appear ...
	I0816 18:36:46.485623  495127 api_server.go:88] waiting for apiserver healthz status ...
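[Editor's note: "waiting for apiserver healthz status" is a poll of the apiserver's /healthz endpoint until it answers 200 or the deadline passes. A minimal sketch of such a wait loop; the URL is a hypothetical placeholder (minikube derives host and port from the cluster config), and skipping TLS verification is only for the self-signed probe.]

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed certificate; skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	// Hypothetical endpoint; substitute the real apiserver host:port.
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}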
	I0816 18:36:46.485656  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0816 18:36:46.485711  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0816 18:36:46.523789  495127 cri.go:89] found id: "c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d"
	I0816 18:36:46.523812  495127 cri.go:89] found id: "447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef"
	I0816 18:36:46.523817  495127 cri.go:89] found id: ""
	I0816 18:36:46.523824  495127 logs.go:276] 2 containers: [c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d 447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef]
	I0816 18:36:46.523880  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.528832  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.532107  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0816 18:36:46.532174  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0816 18:36:46.578253  495127 cri.go:89] found id: "2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9"
	I0816 18:36:46.578272  495127 cri.go:89] found id: "823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330"
	I0816 18:36:46.578277  495127 cri.go:89] found id: ""
	I0816 18:36:46.578284  495127 logs.go:276] 2 containers: [2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9 823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330]
	I0816 18:36:46.578344  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.582564  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.586077  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0816 18:36:46.586142  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0816 18:36:46.628298  495127 cri.go:89] found id: "e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df"
	I0816 18:36:46.628320  495127 cri.go:89] found id: "fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625"
	I0816 18:36:46.628325  495127 cri.go:89] found id: ""
	I0816 18:36:46.628333  495127 logs.go:276] 2 containers: [e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625]
	I0816 18:36:46.628414  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.632532  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.636035  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0816 18:36:46.636101  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0816 18:36:46.674563  495127 cri.go:89] found id: "67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141"
	I0816 18:36:46.674585  495127 cri.go:89] found id: "b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b"
	I0816 18:36:46.674590  495127 cri.go:89] found id: ""
	I0816 18:36:46.674597  495127 logs.go:276] 2 containers: [67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141 b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b]
	I0816 18:36:46.674654  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.678371  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.682128  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0816 18:36:46.682198  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0816 18:36:46.741343  495127 cri.go:89] found id: "70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513"
	I0816 18:36:46.741364  495127 cri.go:89] found id: "c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040"
	I0816 18:36:46.741369  495127 cri.go:89] found id: ""
	I0816 18:36:46.741376  495127 logs.go:276] 2 containers: [70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513 c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040]
	I0816 18:36:46.741448  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.745164  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.748401  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0816 18:36:46.748467  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0816 18:36:46.797523  495127 cri.go:89] found id: "6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9"
	I0816 18:36:46.797547  495127 cri.go:89] found id: "78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b"
	I0816 18:36:46.797552  495127 cri.go:89] found id: ""
	I0816 18:36:46.797559  495127 logs.go:276] 2 containers: [6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9 78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b]
	I0816 18:36:46.797636  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.801326  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.804818  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0816 18:36:46.804894  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0816 18:36:46.847681  495127 cri.go:89] found id: "ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0"
	I0816 18:36:46.847701  495127 cri.go:89] found id: "b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432"
	I0816 18:36:46.847706  495127 cri.go:89] found id: ""
	I0816 18:36:46.847713  495127 logs.go:276] 2 containers: [ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0 b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432]
	I0816 18:36:46.847777  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.851606  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.855316  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0816 18:36:46.855413  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0816 18:36:46.897537  495127 cri.go:89] found id: "83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830"
	I0816 18:36:46.897569  495127 cri.go:89] found id: "761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3"
	I0816 18:36:46.897575  495127 cri.go:89] found id: ""
	I0816 18:36:46.897583  495127 logs.go:276] 2 containers: [83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830 761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3]
	I0816 18:36:46.897646  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.901341  495127 ssh_runner.go:195] Run: which crictl
	I0816 18:36:46.904712  495127 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0816 18:36:46.904784  495127 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0816 18:36:46.943980  495127 cri.go:89] found id: "3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503"
	I0816 18:36:46.944007  495127 cri.go:89] found id: ""
	I0816 18:36:46.944015  495127 logs.go:276] 1 containers: [3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503]
	I0816 18:36:46.944106  495127 ssh_runner.go:195] Run: which crictl
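[Editor's note: the discovery loop above asks crictl for container IDs per control-plane component; most components report two IDs because the node was restarted for SecondStart, leaving one exited and one running container. A minimal sketch of that step; component names mirror the log, the helper name is hypothetical.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns all container IDs (running and exited) whose name
// matches, via `sudo crictl ps -a --quiet --name=<name>`, one ID per line.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
		"kubernetes-dashboard"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}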
	I0816 18:36:46.947863  495127 logs.go:123] Gathering logs for kube-controller-manager [78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b] ...
	I0816 18:36:46.947887  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b"
	I0816 18:36:47.026295  495127 logs.go:123] Gathering logs for kindnet [b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432] ...
	I0816 18:36:47.026328  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432"
	I0816 18:36:47.082558  495127 logs.go:123] Gathering logs for storage-provisioner [761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3] ...
	I0816 18:36:47.082592  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3"
	I0816 18:36:47.120921  495127 logs.go:123] Gathering logs for containerd ...
	I0816 18:36:47.120947  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0816 18:36:47.188027  495127 logs.go:123] Gathering logs for etcd [823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330] ...
	I0816 18:36:47.188064  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330"
	I0816 18:36:47.255082  495127 logs.go:123] Gathering logs for kube-scheduler [b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b] ...
	I0816 18:36:47.255113  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b"
	I0816 18:36:47.330129  495127 logs.go:123] Gathering logs for kube-proxy [70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513] ...
	I0816 18:36:47.330162  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513"
	I0816 18:36:47.399918  495127 logs.go:123] Gathering logs for kube-controller-manager [6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9] ...
	I0816 18:36:47.399949  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9"
	I0816 18:36:47.501382  495127 logs.go:123] Gathering logs for container status ...
	I0816 18:36:47.501412  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0816 18:36:47.581218  495127 logs.go:123] Gathering logs for describe nodes ...
	I0816 18:36:47.581247  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0816 18:36:47.764622  495127 logs.go:123] Gathering logs for etcd [2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9] ...
	I0816 18:36:47.764658  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9"
	I0816 18:36:47.833748  495127 logs.go:123] Gathering logs for coredns [e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df] ...
	I0816 18:36:47.833779  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df"
	I0816 18:36:47.894458  495127 logs.go:123] Gathering logs for kubernetes-dashboard [3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503] ...
	I0816 18:36:47.894486  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503"
	I0816 18:36:47.937944  495127 logs.go:123] Gathering logs for storage-provisioner [83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830] ...
	I0816 18:36:47.937977  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830"
	I0816 18:36:48.057666  495127 logs.go:123] Gathering logs for dmesg ...
	I0816 18:36:48.057698  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0816 18:36:48.081863  495127 logs.go:123] Gathering logs for kube-apiserver [c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d] ...
	I0816 18:36:48.081993  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d"
	I0816 18:36:48.166065  495127 logs.go:123] Gathering logs for kube-apiserver [447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef] ...
	I0816 18:36:48.166144  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef"
	I0816 18:36:48.241742  495127 logs.go:123] Gathering logs for kube-scheduler [67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141] ...
	I0816 18:36:48.241820  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141"
	I0816 18:36:48.307691  495127 logs.go:123] Gathering logs for kubelet ...
	I0816 18:36:48.307717  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0816 18:36:48.398753  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:19 old-k8s-version-686713 kubelet[661]: E0816 18:31:19.709619     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:48.399080  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:20 old-k8s-version-686713 kubelet[661]: E0816 18:31:20.578848     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.403019  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:35 old-k8s-version-686713 kubelet[661]: E0816 18:31:35.964767     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:48.404939  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:42 old-k8s-version-686713 kubelet[661]: E0816 18:31:42.681634     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.405337  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:43 old-k8s-version-686713 kubelet[661]: E0816 18:31:43.679894     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.406105  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:47 old-k8s-version-686713 kubelet[661]: E0816 18:31:47.649890     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.406353  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:47 old-k8s-version-686713 kubelet[661]: E0816 18:31:47.950964     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.406843  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:51 old-k8s-version-686713 kubelet[661]: E0816 18:31:51.702800     661 pod_workers.go:191] Error syncing pod 657e9855-bf06-453c-b32b-8665ce255ff7 ("storage-provisioner_kube-system(657e9855-bf06-453c-b32b-8665ce255ff7)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(657e9855-bf06-453c-b32b-8665ce255ff7)"
	W0816 18:36:48.407866  495127 logs.go:138] Found kubelet problem: Aug 16 18:31:59 old-k8s-version-686713 kubelet[661]: E0816 18:31:59.727646     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.410502  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:00 old-k8s-version-686713 kubelet[661]: E0816 18:32:00.969457     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:48.411039  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:07 old-k8s-version-686713 kubelet[661]: E0816 18:32:07.650284     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.411270  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:11 old-k8s-version-686713 kubelet[661]: E0816 18:32:11.950723     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.411925  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:22 old-k8s-version-686713 kubelet[661]: E0816 18:32:22.799899     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.412233  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:26 old-k8s-version-686713 kubelet[661]: E0816 18:32:26.950439     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.412620  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:27 old-k8s-version-686713 kubelet[661]: E0816 18:32:27.650291     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.413037  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:38 old-k8s-version-686713 kubelet[661]: E0816 18:32:38.950175     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.413264  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:39 old-k8s-version-686713 kubelet[661]: E0816 18:32:39.953204     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.413634  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:51 old-k8s-version-686713 kubelet[661]: E0816 18:32:51.951277     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.416217  495127 logs.go:138] Found kubelet problem: Aug 16 18:32:51 old-k8s-version-686713 kubelet[661]: E0816 18:32:51.968189     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:48.416891  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:05 old-k8s-version-686713 kubelet[661]: E0816 18:33:05.923756     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.417134  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:06 old-k8s-version-686713 kubelet[661]: E0816 18:33:06.950498     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.417528  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:07 old-k8s-version-686713 kubelet[661]: E0816 18:33:07.649752     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.417760  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:17 old-k8s-version-686713 kubelet[661]: E0816 18:33:17.954137     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.418184  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:21 old-k8s-version-686713 kubelet[661]: E0816 18:33:21.950218     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.418421  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:29 old-k8s-version-686713 kubelet[661]: E0816 18:33:29.953172     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.418800  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:35 old-k8s-version-686713 kubelet[661]: E0816 18:33:35.951401     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.419079  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:43 old-k8s-version-686713 kubelet[661]: E0816 18:33:43.953295     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.419510  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:47 old-k8s-version-686713 kubelet[661]: E0816 18:33:47.950982     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.419797  495127 logs.go:138] Found kubelet problem: Aug 16 18:33:55 old-k8s-version-686713 kubelet[661]: E0816 18:33:55.950598     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.420222  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:01 old-k8s-version-686713 kubelet[661]: E0816 18:34:01.950707     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.420525  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:09 old-k8s-version-686713 kubelet[661]: E0816 18:34:09.950579     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.420917  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:15 old-k8s-version-686713 kubelet[661]: E0816 18:34:15.951359     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.423690  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:23 old-k8s-version-686713 kubelet[661]: E0816 18:34:23.960857     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0816 18:36:48.424322  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:31 old-k8s-version-686713 kubelet[661]: E0816 18:34:31.201952     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.424657  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:37 old-k8s-version-686713 kubelet[661]: E0816 18:34:37.650224     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.424837  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:38 old-k8s-version-686713 kubelet[661]: E0816 18:34:38.950603     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.425291  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:49 old-k8s-version-686713 kubelet[661]: E0816 18:34:49.950383     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.425514  495127 logs.go:138] Found kubelet problem: Aug 16 18:34:51 old-k8s-version-686713 kubelet[661]: E0816 18:34:51.950822     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.425895  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:02 old-k8s-version-686713 kubelet[661]: E0816 18:35:02.950213     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.426136  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:04 old-k8s-version-686713 kubelet[661]: E0816 18:35:04.950445     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.426549  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:15 old-k8s-version-686713 kubelet[661]: E0816 18:35:15.950603     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.426772  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:19 old-k8s-version-686713 kubelet[661]: E0816 18:35:19.955737     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.427192  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:28 old-k8s-version-686713 kubelet[661]: E0816 18:35:28.950646     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.427487  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:31 old-k8s-version-686713 kubelet[661]: E0816 18:35:31.950471     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.427876  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:43 old-k8s-version-686713 kubelet[661]: E0816 18:35:43.955091     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.428108  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:45 old-k8s-version-686713 kubelet[661]: E0816 18:35:45.950556     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.428506  495127 logs.go:138] Found kubelet problem: Aug 16 18:35:55 old-k8s-version-686713 kubelet[661]: E0816 18:35:55.950256     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.428730  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:00 old-k8s-version-686713 kubelet[661]: E0816 18:36:00.951188     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.429119  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:07 old-k8s-version-686713 kubelet[661]: E0816 18:36:07.951260     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.429344  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:13 old-k8s-version-686713 kubelet[661]: E0816 18:36:13.954479     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.429734  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: E0816 18:36:22.950180     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.430016  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:28 old-k8s-version-686713 kubelet[661]: E0816 18:36:28.950473     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.430440  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: E0816 18:36:35.958240     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.430664  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:39 old-k8s-version-686713 kubelet[661]: E0816 18:36:39.953696     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.431047  495127 logs.go:138] Found kubelet problem: Aug 16 18:36:47 old-k8s-version-686713 kubelet[661]: E0816 18:36:47.950244     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	I0816 18:36:48.431075  495127 logs.go:123] Gathering logs for coredns [fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625] ...
	I0816 18:36:48.431108  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625"
	I0816 18:36:48.485743  495127 logs.go:123] Gathering logs for kube-proxy [c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040] ...
	I0816 18:36:48.485772  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040"
	I0816 18:36:48.551127  495127 logs.go:123] Gathering logs for kindnet [ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0] ...
	I0816 18:36:48.551157  495127 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0"
	I0816 18:36:48.655233  495127 out.go:358] Setting ErrFile to fd 2...
	I0816 18:36:48.655300  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0816 18:36:48.655384  495127 out.go:270] X Problems detected in kubelet:
	W0816 18:36:48.655426  495127 out.go:270]   Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: E0816 18:36:22.950180     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.655456  495127 out.go:270]   Aug 16 18:36:28 old-k8s-version-686713 kubelet[661]: E0816 18:36:28.950473     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.655505  495127 out.go:270]   Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: E0816 18:36:35.958240     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	W0816 18:36:48.655540  495127 out.go:270]   Aug 16 18:36:39 old-k8s-version-686713 kubelet[661]: E0816 18:36:39.953696     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0816 18:36:48.655588  495127 out.go:270]   Aug 16 18:36:47 old-k8s-version-686713 kubelet[661]: E0816 18:36:47.950244     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	I0816 18:36:48.655633  495127 out.go:358] Setting ErrFile to fd 2...
	I0816 18:36:48.655654  495127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:36:58.655965  495127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0816 18:36:58.666069  495127 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0816 18:36:58.695930  495127 out.go:201] 
	W0816 18:36:58.722246  495127 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0816 18:36:58.722290  495127 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0816 18:36:58.722308  495127 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0816 18:36:58.722314  495127 out.go:270] * 
	W0816 18:36:58.723195  495127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:36:58.754824  495127 out.go:201] 

                                                
                                                
** /stderr **
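The exit above (status 102, reason K8S_UNHEALTHY_CONTROL_PLANE) means the restarted control plane never reported a healthy v1.20.0 apiserver within the 6m0s wait; the repeated kubelet errors about fake.domain come from an earlier addons step that points the metrics-server registry at an unresolvable host (see the audit table further down). A minimal sketch of the recovery path minikube itself suggests, assuming the same out/minikube-linux-arm64 binary and profile name:

	# Capture logs for a bug report before wiping state, as the output suggests.
	out/minikube-linux-arm64 -p old-k8s-version-686713 logs --file=logs.txt
	# Delete all profiles and purge cached state, then re-run the failed start command.
	out/minikube-linux-arm64 delete --all --purge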
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-686713 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-686713
helpers_test.go:235: (dbg) docker inspect old-k8s-version-686713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81",
	        "Created": "2024-08-16T18:27:55.822686335Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-16T18:30:48.34271542Z",
	            "FinishedAt": "2024-08-16T18:30:47.330023454Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81/hostname",
	        "HostsPath": "/var/lib/docker/containers/bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81/hosts",
	        "LogPath": "/var/lib/docker/containers/bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81/bdf0a18027ec2962327c23e4774df684e0a7a0e621740bf6e74b65ddddf5bb81-json.log",
	        "Name": "/old-k8s-version-686713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-686713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-686713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fed3bfafc02aa285ab374e5eb7867a9212191f578c1805c233b67eac5671a8fb-init/diff:/var/lib/docker/overlay2/6d9ca87c64683da0141fe1f37bb6088cb89212b329dea26763f56ee455e7f801/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fed3bfafc02aa285ab374e5eb7867a9212191f578c1805c233b67eac5671a8fb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fed3bfafc02aa285ab374e5eb7867a9212191f578c1805c233b67eac5671a8fb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fed3bfafc02aa285ab374e5eb7867a9212191f578c1805c233b67eac5671a8fb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-686713",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-686713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-686713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-686713",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-686713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b5c3003113380560cf3da615e0722f708a9571eccc4d9b8043b41b9c40d0823",
	            "SandboxKey": "/var/run/docker/netns/4b5c30031133",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-686713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee182b23284b00c17ec8487ac50e19233058152366d2e72d62bc8898796a0c0d",
	                    "EndpointID": "02771d0af0cad957877d668fb3a7730e59dc9fc1d0bc0604d0d47a1209842900",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-686713",
	                        "bdf0a18027ec"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
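Rather than scanning the full JSON above, individual fields can be pulled with docker inspect's Go-template --format flag; a small sketch against this container, using field paths visible in the dump (one-off diagnostic queries, not part of the test):

	# Container state and restart count only.
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-686713
	# Host port published for the apiserver's 8443/tcp (33438 in the dump above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-686713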
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-686713 -n old-k8s-version-686713
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-686713 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-686713 logs -n 25: (2.940011176s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-037923                              | cert-expiration-037923   | jenkins | v1.33.1 | 16 Aug 24 18:26 UTC | 16 Aug 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-140539                               | force-systemd-env-140539 | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-140539                            | force-systemd-env-140539 | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	| start   | -p cert-options-650240                                 | cert-options-650240      | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-650240 ssh                                | cert-options-650240      | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-650240 -- sudo                         | cert-options-650240      | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-650240                                 | cert-options-650240      | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:27 UTC |
	| start   | -p old-k8s-version-686713                              | old-k8s-version-686713   | jenkins | v1.33.1 | 16 Aug 24 18:27 UTC | 16 Aug 24 18:30 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-037923                              | cert-expiration-037923   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:30 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-037923                              | cert-expiration-037923   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:30 UTC |
	| start   | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:31 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-686713        | old-k8s-version-686713   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:30 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-686713                              | old-k8s-version-686713   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:30 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-686713             | old-k8s-version-686713   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC | 16 Aug 24 18:30 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-686713                              | old-k8s-version-686713   | jenkins | v1.33.1 | 16 Aug 24 18:30 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-691813             | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:31 UTC | 16 Aug 24 18:31 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:31 UTC | 16 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-691813                  | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:32 UTC | 16 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:32 UTC | 16 Aug 24 18:36 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| image   | no-preload-691813 image list                           | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC | 16 Aug 24 18:36 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC | 16 Aug 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC | 16 Aug 24 18:36 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC | 16 Aug 24 18:36 UTC |
	| delete  | -p no-preload-691813                                   | no-preload-691813        | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC | 16 Aug 24 18:36 UTC |
	| start   | -p embed-certs-403200                                  | embed-certs-403200       | jenkins | v1.33.1 | 16 Aug 24 18:36 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
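	The addons enable metrics-server row for old-k8s-version-686713 above is the origin of the ImagePullBackOff/ErrImagePull loop earlier in this report: it repoints the MetricsServer image and registry at the placeholder host fake.domain, so every pull fails DNS resolution ("dial tcp: lookup fake.domain ... no such host", as logged above). Reproduced from the audit table, wrapped here only for readability:
	
	out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-686713 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain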
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 18:36:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 18:36:53.764146  506546 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:36:53.764334  506546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:36:53.764348  506546 out.go:358] Setting ErrFile to fd 2...
	I0816 18:36:53.764354  506546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:36:53.764637  506546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:36:53.765142  506546 out.go:352] Setting JSON to false
	I0816 18:36:53.766204  506546 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8344,"bootTime":1723825070,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 18:36:53.766279  506546 start.go:139] virtualization:  
	I0816 18:36:53.769138  506546 out.go:177] * [embed-certs-403200] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 18:36:53.771262  506546 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:36:53.771328  506546 notify.go:220] Checking for updates...
	I0816 18:36:53.774838  506546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:36:53.776747  506546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 18:36:53.778957  506546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 18:36:53.780810  506546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:36:53.782546  506546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:36:53.785274  506546 config.go:182] Loaded profile config "old-k8s-version-686713": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 18:36:53.785450  506546 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:36:53.819927  506546 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:36:53.820036  506546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:36:53.879613  506546 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-16 18:36:53.870187712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:36:53.879727  506546 docker.go:307] overlay module found
	I0816 18:36:53.883279  506546 out.go:177] * Using the docker driver based on user configuration
	I0816 18:36:53.885131  506546 start.go:297] selected driver: docker
	I0816 18:36:53.885150  506546 start.go:901] validating driver "docker" against <nil>
	I0816 18:36:53.885164  506546 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:36:53.885809  506546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:36:53.938024  506546 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-16 18:36:53.928422993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:36:53.938216  506546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 18:36:53.938448  506546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 18:36:53.940503  506546 out.go:177] * Using Docker driver with root privileges
	I0816 18:36:53.942424  506546 cni.go:84] Creating CNI manager for ""
	I0816 18:36:53.942452  506546 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 18:36:53.942474  506546 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 18:36:53.942567  506546 start.go:340] cluster config:
	{Name:embed-certs-403200 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-403200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 18:36:53.946867  506546 out.go:177] * Starting "embed-certs-403200" primary control-plane node in "embed-certs-403200" cluster
	I0816 18:36:53.950000  506546 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0816 18:36:53.952957  506546 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0816 18:36:53.955720  506546 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 18:36:53.955805  506546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0816 18:36:53.955825  506546 cache.go:56] Caching tarball of preloaded images
	I0816 18:36:53.955919  506546 preload.go:172] Found /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0816 18:36:53.955934  506546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0816 18:36:53.956007  506546 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 18:36:53.956574  506546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/embed-certs-403200/config.json ...
	I0816 18:36:53.956614  506546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/embed-certs-403200/config.json: {Name:mk8a44b116cf9db5ae5586555f099406fb6020c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 18:36:53.977049  506546 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0816 18:36:53.977071  506546 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 18:36:53.977145  506546 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 18:36:53.977168  506546 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 18:36:53.977177  506546 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 18:36:53.977185  506546 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 18:36:53.977191  506546 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0816 18:36:54.115224  506546 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0816 18:36:54.115283  506546 cache.go:194] Successfully downloaded all kic artifacts
	I0816 18:36:54.115326  506546 start.go:360] acquireMachinesLock for embed-certs-403200: {Name:mkf9269e0cd0969fbc1285ac1a4db384ab80ec34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 18:36:54.115866  506546 start.go:364] duration metric: took 511.687µs to acquireMachinesLock for "embed-certs-403200"
	I0816 18:36:54.115909  506546 start.go:93] Provisioning new machine with config: &{Name:embed-certs-403200 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-403200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0816 18:36:54.115996  506546 start.go:125] createHost starting for "" (driver="docker")
	I0816 18:36:58.655965  495127 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0816 18:36:58.666069  495127 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0816 18:36:58.695930  495127 out.go:201] 
	W0816 18:36:58.722246  495127 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0816 18:36:58.722290  495127 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0816 18:36:58.722308  495127 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0816 18:36:58.722314  495127 out.go:270] * 
	W0816 18:36:58.723195  495127 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0816 18:36:58.754824  495127 out.go:201] 
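The block above is the terminal failure for the old-k8s-version profile: the control plane never reported v1.20.0, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE even though the apiserver healthz itself returns 200. A minimal sketch of the recovery steps the boxed output itself suggests, using the same binary this run was driven with:

    # Tear down every profile and purge cached state, per the suggestion above
    out/minikube-linux-arm64 delete --all --purge
    # Capture logs to attach to a GitHub issue, per the boxed advice
    out/minikube-linux-arm64 logs --file=logs.txt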
	I0816 18:36:54.118517  506546 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0816 18:36:54.118780  506546 start.go:159] libmachine.API.Create for "embed-certs-403200" (driver="docker")
	I0816 18:36:54.118818  506546 client.go:168] LocalClient.Create starting
	I0816 18:36:54.118897  506546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-287979/.minikube/certs/ca.pem
	I0816 18:36:54.118936  506546 main.go:141] libmachine: Decoding PEM data...
	I0816 18:36:54.118957  506546 main.go:141] libmachine: Parsing certificate...
	I0816 18:36:54.119024  506546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19461-287979/.minikube/certs/cert.pem
	I0816 18:36:54.119054  506546 main.go:141] libmachine: Decoding PEM data...
	I0816 18:36:54.119071  506546 main.go:141] libmachine: Parsing certificate...
	I0816 18:36:54.119504  506546 cli_runner.go:164] Run: docker network inspect embed-certs-403200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0816 18:36:54.137122  506546 cli_runner.go:211] docker network inspect embed-certs-403200 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0816 18:36:54.137295  506546 network_create.go:284] running [docker network inspect embed-certs-403200] to gather additional debugging logs...
	I0816 18:36:54.137327  506546 cli_runner.go:164] Run: docker network inspect embed-certs-403200
	W0816 18:36:54.167053  506546 cli_runner.go:211] docker network inspect embed-certs-403200 returned with exit code 1
	I0816 18:36:54.167081  506546 network_create.go:287] error running [docker network inspect embed-certs-403200]: docker network inspect embed-certs-403200: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-403200 not found
	I0816 18:36:54.167095  506546 network_create.go:289] output of [docker network inspect embed-certs-403200]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-403200 not found
	
	** /stderr **
	I0816 18:36:54.167202  506546 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0816 18:36:54.183772  506546 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0f3b60dee0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:03:a8:78:fd} reservation:<nil>}
	I0816 18:36:54.184207  506546 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-41b8fbc4cf24 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:48:4c:ac:fc} reservation:<nil>}
	I0816 18:36:54.184516  506546 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-123b29d4aa8e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:96:35:52:f7} reservation:<nil>}
	I0816 18:36:54.185025  506546 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001874d90}
	I0816 18:36:54.185047  506546 network_create.go:124] attempt to create docker network embed-certs-403200 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0816 18:36:54.185114  506546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-403200 embed-certs-403200
	I0816 18:36:54.257868  506546 network_create.go:108] docker network embed-certs-403200 192.168.76.0/24 created
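Subnet selection above works by walking minikube's private ranges and skipping any that an existing bridge already claims (192.168.49.0/24, .58.0/24, and .67.0/24 here), then creating the network on the first free one. A hedged way to verify the result by hand, using the network name from this log:

    # Should print 192.168.76.0/24, matching the "network created" line above
    docker network inspect embed-certs-403200 --format '{{(index .IPAM.Config 0).Subnet}}'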
	I0816 18:36:54.257906  506546 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-403200" container
	I0816 18:36:54.257977  506546 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0816 18:36:54.272637  506546 cli_runner.go:164] Run: docker volume create embed-certs-403200 --label name.minikube.sigs.k8s.io=embed-certs-403200 --label created_by.minikube.sigs.k8s.io=true
	I0816 18:36:54.289139  506546 oci.go:103] Successfully created a docker volume embed-certs-403200
	I0816 18:36:54.289232  506546 cli_runner.go:164] Run: docker run --rm --name embed-certs-403200-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-403200 --entrypoint /usr/bin/test -v embed-certs-403200:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0816 18:36:54.914847  506546 oci.go:107] Successfully prepared a docker volume embed-certs-403200
	I0816 18:36:54.914901  506546 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 18:36:54.914935  506546 kic.go:194] Starting extracting preloaded images to volume ...
	I0816 18:36:54.915050  506546 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-403200:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
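The extraction step above mounts the preload tarball read-only into a throwaway kicbase container and untars it into the embed-certs-403200 volume, so the node starts with its containerd image store pre-populated. A hedged spot-check that the volume exists and where Docker backs it on the host:

    # Prints the host path backing the volume the preload was unpacked into
    docker volume inspect embed-certs-403200 --format '{{.Mountpoint}}'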
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	56abb184690fa       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   3cb3b0dc915f6       dashboard-metrics-scraper-8d5bb5db8-tkzd5
	83541051f882d       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   9590facf18a8e       storage-provisioner
	3946f4bb419f3       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   1d3c360759d15       kubernetes-dashboard-cd95d586-d9dhg
	ff564620ddb48       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   474425fa5db93       busybox
	e1b93070d9316       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   04b3dbd85da5a       coredns-74ff55c5b-vdt9d
	70e77ee9f2cac       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   363d978a15afd       kube-proxy-d2sb2
	ce04226e67d7d       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   e46155f2de0ac       kindnet-b9ptk
	761eeb094020a       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   9590facf18a8e       storage-provisioner
	67a319b21b20b       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   3629f1a6f1393       kube-scheduler-old-k8s-version-686713
	c9fb62cfb5636       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   11ff8002b6ca8       kube-apiserver-old-k8s-version-686713
	2f9ca8dbad25d       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   7e50e24633c81       etcd-old-k8s-version-686713
	6dc4fb429749d       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   70042d47a16b3       kube-controller-manager-old-k8s-version-686713
	d44f3db419150       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   a78fc2915708f       busybox
	fd426e802a394       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   ee43641609c11       coredns-74ff55c5b-vdt9d
	b1c5cc5a08d84       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   7a97438417bcb       kindnet-b9ptk
	c8808a95b766e       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   685909fd96c4b       kube-proxy-d2sb2
	823d92b3aa088       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   dcfb06b060a20       etcd-old-k8s-version-686713
	b5415dfeafde6       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   697e1c402df4c       kube-scheduler-old-k8s-version-686713
	78be83580cf24       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   d294cdbc2e1ec       kube-controller-manager-old-k8s-version-686713
	447b982d5e55f       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   ab265ce493feb       kube-apiserver-old-k8s-version-686713
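The table above is the CRI's view of the old-k8s-version-686713 node: the currently running control-plane containers show ATTEMPT 1 (one restart each, with their ATTEMPT 0 counterparts Exited below), while dashboard-metrics-scraper has already exited five times. A hedged way to reproduce the same listing from inside the node:

    # Same listing, straight from containerd via the CRI CLI
    out/minikube-linux-arm64 -p old-k8s-version-686713 ssh -- sudo crictl ps -a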
	
	
	==> containerd <==
	Aug 16 18:32:51 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:32:51.965105096Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 16 18:32:51 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:32:51.967005201Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 16 18:32:51 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:32:51.967191318Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 16 18:33:04 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:04.952648710Z" level=info msg="CreateContainer within sandbox \"3cb3b0dc915f6e76bc161311ce91ace8deac07f67585e073577e9ac5eecb34fd\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Aug 16 18:33:04 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:04.993443531Z" level=info msg="CreateContainer within sandbox \"3cb3b0dc915f6e76bc161311ce91ace8deac07f67585e073577e9ac5eecb34fd\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd\""
	Aug 16 18:33:04 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:04.994142426Z" level=info msg="StartContainer for \"035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd\""
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.071926050Z" level=info msg="StartContainer for \"035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd\" returns successfully"
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.113823539Z" level=info msg="shim disconnected" id=035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd namespace=k8s.io
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.114064236Z" level=warning msg="cleaning up after shim disconnected" id=035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd namespace=k8s.io
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.114094546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.926596104Z" level=info msg="RemoveContainer for \"07320d617042a04955cc18b820b7f8f7528bf1e9e47ba308dfc674701663408d\""
	Aug 16 18:33:05 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:33:05.932951244Z" level=info msg="RemoveContainer for \"07320d617042a04955cc18b820b7f8f7528bf1e9e47ba308dfc674701663408d\" returns successfully"
	Aug 16 18:34:23 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:23.951754217Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:34:23 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:23.957780020Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Aug 16 18:34:23 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:23.959301468Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Aug 16 18:34:23 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:23.960404907Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 16 18:34:30 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:30.952357372Z" level=info msg="CreateContainer within sandbox \"3cb3b0dc915f6e76bc161311ce91ace8deac07f67585e073577e9ac5eecb34fd\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 16 18:34:30 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:30.967407878Z" level=info msg="CreateContainer within sandbox \"3cb3b0dc915f6e76bc161311ce91ace8deac07f67585e073577e9ac5eecb34fd\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b\""
	Aug 16 18:34:30 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:30.968020980Z" level=info msg="StartContainer for \"56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b\""
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.041099509Z" level=info msg="StartContainer for \"56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b\" returns successfully"
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.066410061Z" level=info msg="shim disconnected" id=56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b namespace=k8s.io
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.066468604Z" level=warning msg="cleaning up after shim disconnected" id=56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b namespace=k8s.io
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.066480100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.203596242Z" level=info msg="RemoveContainer for \"035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd\""
	Aug 16 18:34:31 old-k8s-version-686713 containerd[567]: time="2024-08-16T18:34:31.209816506Z" level=info msg="RemoveContainer for \"035ce60f756946f475b79e445e6fd1d60d64897bca9a5472801efdf9a22dfbfd\" returns successfully"
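The repeated pull failures above are DNS-level: fake.domain, apparently a deliberately unresolvable registry host used by the test's metrics-server image, cannot be looked up, so containerd never reaches a registry at all. A hedged one-liner that should reproduce the same error from inside the node:

    # Expected to fail with the same "dial tcp: lookup fake.domain ... no such host" error
    out/minikube-linux-arm64 -p old-k8s-version-686713 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4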
	
	
	==> coredns [e1b93070d9316ad4002ac51e29d82e3750f6b8571f5b5fd96d31020533ec28df] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56012 - 23466 "HINFO IN 7845767213633560759.6223275425442288858. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031988778s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0816 18:31:50.350146       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-16 18:31:20.349512399 +0000 UTC m=+0.037249087) (total time: 30.000513531s):
	Trace[2019727887]: [30.000513531s] [30.000513531s] END
	E0816 18:31:50.350272       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 18:31:50.350449       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-16 18:31:20.350004328 +0000 UTC m=+0.037741016) (total time: 30.000431373s):
	Trace[939984059]: [30.000431373s] [30.000431373s] END
	E0816 18:31:50.350489       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 18:31:50.350881       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-16 18:31:20.350263059 +0000 UTC m=+0.037999748) (total time: 30.000601621s):
	Trace[911902081]: [30.000601621s] [30.000601621s] END
	E0816 18:31:50.350898       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
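The three reflector timeouts above all fall in the first 30 seconds after this coredns replica started (the traces begin at m=+0.037), while the restarted control plane was still settling; the "plugin/ready" waits stop once 10.96.0.1:443 answers. A hedged check that the apiserver behind that service VIP responds now (probed via the cluster's external endpoint rather than the in-cluster VIP):

    # "ok" here corresponds to the healthz 200s seen elsewhere in this log
    kubectl --context old-k8s-version-686713 get --raw /readyz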
	
	
	==> coredns [fd426e802a394dd085a78eceaa9e1ef8b8a1729bd688f37f444eee97c2e33625] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45865 - 35834 "HINFO IN 2039206084625571909.861868764618788027. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.024537849s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-686713
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-686713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8789c54b9bc6db8e66c461a83302d5a0be0abbdd
	                    minikube.k8s.io/name=old-k8s-version-686713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_16T18_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 16 Aug 2024 18:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-686713
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 16 Aug 2024 18:36:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 16 Aug 2024 18:32:07 +0000   Fri, 16 Aug 2024 18:28:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 16 Aug 2024 18:32:07 +0000   Fri, 16 Aug 2024 18:28:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 16 Aug 2024 18:32:07 +0000   Fri, 16 Aug 2024 18:28:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 16 Aug 2024 18:32:07 +0000   Fri, 16 Aug 2024 18:31:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-686713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a04b556dfa646fba1a10acd63f80f6d
	  System UUID:                0424d917-7c69-4dd7-b500-69a58a159e79
	  Boot ID:                    6cf3c121-8478-4b33-820f-e176429c0afc
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-vdt9d                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m12s
	  kube-system                 etcd-old-k8s-version-686713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m19s
	  kube-system                 kindnet-b9ptk                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m12s
	  kube-system                 kube-apiserver-old-k8s-version-686713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-controller-manager-old-k8s-version-686713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-proxy-d2sb2                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-old-k8s-version-686713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 metrics-server-9975d5f86-nj5gr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-tkzd5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-d9dhg               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m39s (x5 over 8m39s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m39s (x5 over 8m39s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m39s (x5 over 8m39s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s                  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s                  kubelet     Node old-k8s-version-686713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s                  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node old-k8s-version-686713 status is now: NodeReady
	  Normal  Starting                 8m11s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m58s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m58s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m58s)  kubelet     Node old-k8s-version-686713 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
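The node dump above is the standard describe view; a hedged equivalent invocation against this profile, in the same style as the kubectl commands used elsewhere in this report:

    kubectl --context old-k8s-version-686713 describe node old-k8s-version-686713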
	
	
	==> dmesg <==
	[Aug16 17:13] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [2f9ca8dbad25d5c189e46bfb5b1c0fb6ba3bfe49b1436a6da5f276e21a4bf6e9] <==
	2024-08-16 18:32:53.971392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:03.971362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:13.971271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:23.971341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:33.971549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:43.971474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:33:53.971252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:03.971285 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:13.971489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:23.971291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:33.971667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:43.972460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:34:53.971258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:03.971508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:13.971300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:23.971279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:33.971303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:43.971458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:35:53.971427 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:03.971291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:13.972041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:23.971612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:33.971247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:43.971324 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:36:53.971438 I | etcdserver/api/etcdhttp: /health OK (status code 200)
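The /health 200s above show etcd answering its probe every ten seconds for the whole window, suggesting storage was not what kept this control plane unhealthy. A hedged manual probe of the same data via etcdctl inside the etcd pod; the cert paths are the usual minikube locations and are an assumption here, not read from this log:

    # endpoint health against the serving URL from the "serving client requests" lines;
    # /var/lib/minikube/certs/etcd/* paths are assumed, not confirmed by this log
    kubectl --context old-k8s-version-686713 -n kube-system exec etcd-old-k8s-version-686713 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health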
	
	
	==> etcd [823d92b3aa088f0ae2ac13e28669fcb44185774a0ec41c94a4f70cb9d841f330] <==
	raft2024/08/16 18:28:23 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/08/16 18:28:23 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/08/16 18:28:23 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/08/16 18:28:23 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/08/16 18:28:23 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-08-16 18:28:23.777254 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-16 18:28:23.778184 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-16 18:28:23.778342 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-16 18:28:23.778476 I | etcdserver: published {Name:old-k8s-version-686713 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-08-16 18:28:23.778697 I | embed: ready to serve client requests
	2024-08-16 18:28:23.780259 I | embed: serving client requests on 192.168.85.2:2379
	2024-08-16 18:28:23.823245 I | embed: ready to serve client requests
	2024-08-16 18:28:23.853323 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-16 18:28:44.476214 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:28:50.512851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:00.512891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:10.512805 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:20.512737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:30.512916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:40.512819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:29:50.512792 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:30:00.513145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:30:10.512869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:30:20.513108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-16 18:30:30.512642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:37:01 up  2:19,  0 users,  load average: 1.07, 2.17, 2.71
	Linux old-k8s-version-686713 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b1c5cc5a08d84ccf10b870b1da860bd7634039448156c7111a600e3a79596432] <==
	E0816 18:29:27.129936       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0816 18:29:30.876628       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:29:30.876667       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0816 18:29:31.567584       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:29:31.567619       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 18:29:32.618044       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:29:32.618083       1 main.go:299] handling current node
	I0816 18:29:42.618789       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:29:42.618829       1 main.go:299] handling current node
	I0816 18:29:52.617935       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:29:52.617970       1 main.go:299] handling current node
	W0816 18:29:58.248057       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 18:29:58.248090       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0816 18:29:59.638295       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:29:59.638348       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 18:30:02.618486       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:30:02.618531       1 main.go:299] handling current node
	W0816 18:30:07.488932       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:30:07.489193       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 18:30:12.618339       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:30:12.618380       1 main.go:299] handling current node
	I0816 18:30:22.618166       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:30:22.618206       1 main.go:299] handling current node
	I0816 18:30:32.618297       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:30:32.618336       1 main.go:299] handling current node
	
	
	==> kindnet [ce04226e67d7d9412dc331daf4c7ee98a4483573ef231865fedf2e1017b08df0] <==
	I0816 18:35:50.819035       1 main.go:299] handling current node
	W0816 18:35:53.331422       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:35:53.331456       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 18:36:00.819433       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:00.819475       1 main.go:299] handling current node
	I0816 18:36:10.819242       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:10.819280       1 main.go:299] handling current node
	W0816 18:36:12.902593       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:36:12.902879       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 18:36:20.818905       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:20.818945       1 main.go:299] handling current node
	I0816 18:36:30.819815       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:30.819854       1 main.go:299] handling current node
	W0816 18:36:31.037836       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0816 18:36:31.037901       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0816 18:36:40.819723       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:40.819772       1 main.go:299] handling current node
	W0816 18:36:47.880071       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 18:36:47.880106       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0816 18:36:50.819230       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:36:50.819355       1 main.go:299] handling current node
	W0816 18:36:57.119542       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:36:57.119576       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0816 18:37:00.819627       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0816 18:37:00.819662       1 main.go:299] handling current node
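The warnings repeating above are RBAC denials: the kube-system:kindnet service account cannot list pods, namespaces, or networkpolicies at cluster scope on this v1.20 cluster, though per-node handling keeps working regardless. A hedged way to confirm one of the denials directly:

    # Expected answer, per the log: no
    kubectl --context old-k8s-version-686713 auth can-i list namespaces \
      --as=system:serviceaccount:kube-system:kindnet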
	
	
	==> kube-apiserver [447b982d5e55ff3f7c2e0daf7eb85d57cc3778df20255ada1e82db270bb3c0ef] <==
	I0816 18:28:31.561875       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 18:28:31.561906       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 18:28:31.567797       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0816 18:28:31.571597       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0816 18:28:31.571620       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 18:28:32.127336       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 18:28:32.169542       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0816 18:28:32.282294       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0816 18:28:32.283368       1 controller.go:606] quota admission added evaluator for: endpoints
	I0816 18:28:32.291956       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 18:28:33.209341       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0816 18:28:34.034103       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0816 18:28:34.119590       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0816 18:28:42.527796       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 18:28:49.176660       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0816 18:28:49.255824       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0816 18:28:59.952796       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:28:59.952841       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:28:59.952851       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 18:29:38.882327       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:29:38.882371       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:29:38.882388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 18:30:17.159611       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:30:17.159674       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:30:17.159683       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [c9fb62cfb5636d23edd12742b0d3803abb87a3cecdf490382bebadcb3f66804d] <==
	I0816 18:33:43.454599       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:33:43.454610       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 18:34:19.497657       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:34:19.497703       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:34:19.497711       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 18:34:20.990711       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 18:34:20.990817       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 18:34:20.990834       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:34:58.580600       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:34:58.580645       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:34:58.580675       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 18:35:40.080325       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:35:40.080369       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:35:40.080559       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 18:36:17.317207       1 handler_proxy.go:102] no RequestInfo found in the context
	E0816 18:36:17.317451       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 18:36:17.317578       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 18:36:21.171553       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:36:21.171597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:36:21.171629       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 18:36:51.245242       1 client.go:360] parsed scheme: "passthrough"
	I0816 18:36:51.245289       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 18:36:51.245298       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
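
Note: the repeating "loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed ... ResponseCode: 503" entries mean the aggregated metrics API is registered but its backing metrics-server service is not answering, which matches the metrics-server ImagePullBackOff in the kubelet log below. A sketch of how to inspect the aggregated API's availability (illustrative):

	kubectl --context old-k8s-version-686713 get apiservice v1beta1.metrics.k8s.io -o yaml
	kubectl --context old-k8s-version-686713 get --raw /apis/metrics.k8s.io/v1beta1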
	
	
	==> kube-controller-manager [6dc4fb429749dff987e078bd740c3d8ac34c76edbcf81d202b385019f44254e9] <==
	I0816 18:32:41.650496       1 request.go:655] Throttling request took 1.048505274s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0816 18:32:42.502572       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:33:08.538404       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:33:14.153204       1 request.go:655] Throttling request took 1.048340963s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0816 18:33:15.005891       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:33:39.040353       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:33:46.673099       1 request.go:655] Throttling request took 1.048360569s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0816 18:33:47.524557       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:34:09.542177       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:34:19.175075       1 request.go:655] Throttling request took 1.048263665s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0816 18:34:20.027637       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:34:40.044180       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:34:51.678094       1 request.go:655] Throttling request took 1.048042233s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0816 18:34:52.529609       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:35:10.546113       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:35:24.179873       1 request.go:655] Throttling request took 1.048434323s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0816 18:35:25.031642       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:35:41.048337       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:35:56.682093       1 request.go:655] Throttling request took 1.048397901s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0816 18:35:57.533508       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:36:11.550383       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:36:29.184077       1 request.go:655] Throttling request took 1.048198684s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0816 18:36:30.038519       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 18:36:42.052257       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0816 18:37:01.688911       1 request.go:655] Throttling request took 1.046828385s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
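
Note: the garbage-collector and resource-quota errors here share the root cause of the apiserver 503s above: discovery still advertises metrics.k8s.io/v1beta1, but its backend never responds, so every discovery sweep fails for that one group (and client-side rate limiting adds the ~1s "Throttling request" delays). One illustrative way to spot any unavailable aggregated APIs at a glance:

	kubectl --context old-k8s-version-686713 get apiservices | grep -i false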
	
	
	==> kube-controller-manager [78be83580cf2485e2d5a7a049e4681acf055c50835844bb627f268dc3bd5943b] <==
	I0816 18:28:49.355237       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0816 18:28:49.369929       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0816 18:28:49.371128       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	E0816 18:28:49.373782       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"17b552be-7e1b-4990-a68a-8559d5412fb8", ResourceVersion:"372", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63859429714, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c1b440), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c1b460)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001c1b480), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001c1b4a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001c1b4c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001c26940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c1b4e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001c1b500), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001c1b540)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c3c8a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c2cac8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40008f0930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000737498)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c2cb18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0816 18:28:49.375593       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vdt9d"
	I0816 18:28:49.391933       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 18:28:49.444846       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 18:28:49.476337       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0816 18:28:49.490411       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 18:28:49.490439       1 disruption.go:339] Sending events to api server.
	I0816 18:28:49.493833       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 18:28:49.582434       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0816 18:28:49.882603       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 18:28:49.890890       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 18:28:49.890912       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 18:28:51.030410       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0816 18:28:51.060620       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bkx7j"
	I0816 18:28:54.193768       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0816 18:30:33.595244       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0816 18:30:33.629789       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 18:30:33.645737       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0816 18:30:33.668704       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 18:30:33.669728       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 18:30:33.778438       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0816 18:30:33.805896       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
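
Note: the metrics-server FailedCreate burst looks like an ordering race rather than a persistent failure: the Deployment was applied before its ServiceAccount existed, so the ReplicaSet controller retried until the account appeared (the pod was eventually created; the kubelet log below shows it in ImagePullBackOff). To confirm the account exists after the dust settles (illustrative):

	kubectl --context old-k8s-version-686713 -n kube-system get serviceaccount metrics-server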
	
	
	==> kube-proxy [70e77ee9f2cac8189460718220ff38a2c7e68efb6c2b3f9e6e0eed05f1567513] <==
	I0816 18:31:20.552643       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0816 18:31:20.552717       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0816 18:31:20.660683       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0816 18:31:20.660803       1 server_others.go:185] Using iptables Proxier.
	I0816 18:31:20.661580       1 server.go:650] Version: v1.20.0
	I0816 18:31:20.662378       1 config.go:315] Starting service config controller
	I0816 18:31:20.662395       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 18:31:20.662413       1 config.go:224] Starting endpoint slice config controller
	I0816 18:31:20.662417       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 18:31:20.762528       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 18:31:20.762570       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [c8808a95b766ea5da16f28d106fac8ed477a20a335d64d6f24a0d16441629040] <==
	I0816 18:28:50.205413       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0816 18:28:50.205534       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0816 18:28:50.230421       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0816 18:28:50.237205       1 server_others.go:185] Using iptables Proxier.
	I0816 18:28:50.237454       1 server.go:650] Version: v1.20.0
	I0816 18:28:50.237991       1 config.go:315] Starting service config controller
	I0816 18:28:50.238005       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 18:28:50.243952       1 config.go:224] Starting endpoint slice config controller
	I0816 18:28:50.243970       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 18:28:50.338104       1 shared_informer.go:247] Caches are synced for service config 
	I0816 18:28:50.344129       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [67a319b21b20b2c80ae76382057598c268c780a5584c0a6774faabede5895141] <==
	I0816 18:31:09.019705       1 serving.go:331] Generated self-signed cert in-memory
	W0816 18:31:16.144400       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 18:31:16.144585       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 18:31:16.144639       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:31:16.144673       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:31:16.577782       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 18:31:16.577912       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:31:16.577919       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:31:16.577934       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 18:31:16.889138       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
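
Note: this scheduler instance started before RBAC for the extension-apiserver-authentication configmap was in place, so it fell back to treating requests as anonymous; the warning itself names the fix. An illustrative instantiation of that suggestion (for the scheduler, the subject is usually the user system:kube-scheduler rather than a service account, and the binding name here is made up):

	kubectl --context old-k8s-version-686713 -n kube-system create rolebinding kube-scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader --user=system:kube-scheduler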
	
	
	==> kube-scheduler [b5415dfeafde650399f65c8eaa6780576175b5a6a7f3c05fe903fc0ab1c7752b] <==
	W0816 18:28:30.723190       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 18:28:30.723240       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 18:28:30.786278       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 18:28:30.786569       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:28:30.789042       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 18:28:30.789152       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 18:28:30.824202       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 18:28:30.824400       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:28:30.824536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 18:28:30.824661       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 18:28:30.824825       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:28:30.825161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 18:28:30.825420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 18:28:30.825562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 18:28:30.825694       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 18:28:30.825824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 18:28:30.825876       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 18:28:30.844436       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:28:31.689489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 18:28:31.716286       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 18:28:31.745748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 18:28:31.793233       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 18:28:31.802235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 18:28:31.840176       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0816 18:28:34.589290       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 16 18:35:04 old-k8s-version-686713 kubelet[661]: E0816 18:35:04.950445     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:35:15 old-k8s-version-686713 kubelet[661]: I0816 18:35:15.950309     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:35:15 old-k8s-version-686713 kubelet[661]: E0816 18:35:15.950603     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:35:19 old-k8s-version-686713 kubelet[661]: E0816 18:35:19.955737     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:35:28 old-k8s-version-686713 kubelet[661]: I0816 18:35:28.949846     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:35:28 old-k8s-version-686713 kubelet[661]: E0816 18:35:28.950646     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:35:31 old-k8s-version-686713 kubelet[661]: E0816 18:35:31.950471     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:35:43 old-k8s-version-686713 kubelet[661]: I0816 18:35:43.953699     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:35:43 old-k8s-version-686713 kubelet[661]: E0816 18:35:43.955091     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:35:45 old-k8s-version-686713 kubelet[661]: E0816 18:35:45.950556     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:35:55 old-k8s-version-686713 kubelet[661]: I0816 18:35:55.949895     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:35:55 old-k8s-version-686713 kubelet[661]: E0816 18:35:55.950256     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:36:00 old-k8s-version-686713 kubelet[661]: E0816 18:36:00.951188     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:36:07 old-k8s-version-686713 kubelet[661]: I0816 18:36:07.950150     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:36:07 old-k8s-version-686713 kubelet[661]: E0816 18:36:07.951260     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:36:13 old-k8s-version-686713 kubelet[661]: E0816 18:36:13.954479     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: I0816 18:36:22.949802     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:36:22 old-k8s-version-686713 kubelet[661]: E0816 18:36:22.950180     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:36:28 old-k8s-version-686713 kubelet[661]: E0816 18:36:28.950473     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: I0816 18:36:35.957465     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:36:35 old-k8s-version-686713 kubelet[661]: E0816 18:36:35.958240     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:36:39 old-k8s-version-686713 kubelet[661]: E0816 18:36:39.953696     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 16 18:36:47 old-k8s-version-686713 kubelet[661]: I0816 18:36:47.949916     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 56abb184690fa79e81480b282a4404f86c94be80ab074e8ef6880064f8e40d9b
	Aug 16 18:36:47 old-k8s-version-686713 kubelet[661]: E0816 18:36:47.950244     661 pod_workers.go:191] Error syncing pod 7889e018-f267-47f4-beb4-d17b5643fc87 ("dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-tkzd5_kubernetes-dashboard(7889e018-f267-47f4-beb4-d17b5643fc87)"
	Aug 16 18:36:51 old-k8s-version-686713 kubelet[661]: E0816 18:36:51.950755     661 pod_workers.go:191] Error syncing pod 46ffaa3d-9364-4297-9676-956836cedbf2 ("metrics-server-9975d5f86-nj5gr_kube-system(46ffaa3d-9364-4297-9676-956836cedbf2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
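
Note: two distinct retry loops repeat through this kubelet log: metrics-server never starts because its image points at the unreachable fake.domain registry (which this test appears to configure on purpose, so ImagePullBackOff is expected), and dashboard-metrics-scraper sits in a 2m40s CrashLoopBackOff. To pull the underlying events and crash output for either pod (illustrative; pod names taken from the log above):

	kubectl --context old-k8s-version-686713 -n kube-system describe pod metrics-server-9975d5f86-nj5gr
	kubectl --context old-k8s-version-686713 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-tkzd5 --previous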
	
	
	==> kubernetes-dashboard [3946f4bb419f390836cae4a5d5344626f7d8e29a3e27fdeadd245e6bebad7503] <==
	2024/08/16 18:31:44 Starting overwatch
	2024/08/16 18:31:44 Using namespace: kubernetes-dashboard
	2024/08/16 18:31:44 Using in-cluster config to connect to apiserver
	2024/08/16 18:31:44 Using secret token for csrf signing
	2024/08/16 18:31:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/16 18:31:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/16 18:31:44 Successful initial request to the apiserver, version: v1.20.0
	2024/08/16 18:31:44 Generating JWE encryption key
	2024/08/16 18:31:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/16 18:31:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/16 18:31:45 Initializing JWE encryption key from synchronized object
	2024/08/16 18:31:45 Creating in-cluster Sidecar client
	2024/08/16 18:31:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:31:45 Serving insecurely on HTTP port: 9090
	2024/08/16 18:32:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:32:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:33:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:33:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:34:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:34:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:35:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:35:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:36:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/16 18:36:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
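
Note: the dashboard itself is healthy here (it reached the apiserver and is serving on port 9090); only its Sidecar metrics client fails its 30-second health check, because the dashboard-metrics-scraper service has no ready backend while that pod crash-loops. A quick check of the service's backing endpoints (illustrative):

	kubectl --context old-k8s-version-686713 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper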
	
	
	==> storage-provisioner [761eeb094020ad669cd5e4d3c642fc27e22a7c5c1ca7e43cf3f2bc72577e44d3] <==
	I0816 18:31:20.443621       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0816 18:31:50.449968       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
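
Note: this first storage-provisioner container exited fatally because it could not reach the in-cluster apiserver VIP (10.96.0.1:443) within the 32s client timeout, consistent with the CNI and RBAC churn right after the restart; the replacement container in the next section comes up and wins leader election cleanly. To confirm the VIP it was dialing (illustrative):

	kubectl --context old-k8s-version-686713 get svc kubernetes -o wide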
	
	
	==> storage-provisioner [83541051f882d1bb2230d26bbe44c443d1ed022d11ceafd571de268cfbdca830] <==
	I0816 18:32:02.047619       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 18:32:02.064069       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 18:32:02.064123       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 18:32:19.599591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 18:32:19.602366       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-686713_798d4472-c59b-4723-9911-d1e8ac0c12ff!
	I0816 18:32:19.604132       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0fc925ca-3e6e-409f-946f-10e329ad8a26", APIVersion:"v1", ResourceVersion:"846", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-686713_798d4472-c59b-4723-9911-d1e8ac0c12ff became leader
	I0816 18:32:19.703527       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-686713_798d4472-c59b-4723-9911-d1e8ac0c12ff!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-686713 -n old-k8s-version-686713
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-686713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-nj5gr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-686713 describe pod metrics-server-9975d5f86-nj5gr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-686713 describe pod metrics-server-9975d5f86-nj5gr: exit status 1 (90.629826ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-nj5gr" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-686713 describe pod metrics-server-9975d5f86-nj5gr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.02s)


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 7.31
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 217.21
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15.57
34 TestAddons/parallel/Ingress 20.26
35 TestAddons/parallel/InspektorGadget 11.07
36 TestAddons/parallel/MetricsServer 5.78
39 TestAddons/parallel/CSI 48.32
40 TestAddons/parallel/Headlamp 16.36
41 TestAddons/parallel/CloudSpanner 6.7
42 TestAddons/parallel/LocalPath 53.03
43 TestAddons/parallel/NvidiaDevicePlugin 5.67
44 TestAddons/parallel/Yakd 12.12
45 TestAddons/StoppedEnableDisable 12.29
46 TestCertOptions 35.21
47 TestCertExpiration 229.59
49 TestForceSystemdFlag 38.19
50 TestForceSystemdEnv 42.74
51 TestDockerEnvContainerd 43.78
56 TestErrorSpam/setup 32.47
57 TestErrorSpam/start 0.77
58 TestErrorSpam/status 1.08
59 TestErrorSpam/pause 1.76
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 1.49
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 46.83
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.52
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
73 TestFunctional/serial/CacheCmd/cache/add_local 1.29
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 47.82
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.65
84 TestFunctional/serial/LogsFileCmd 1.63
85 TestFunctional/serial/InvalidService 4.48
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 9.65
89 TestFunctional/parallel/DryRun 0.56
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.14
95 TestFunctional/parallel/ServiceCmdConnect 10.69
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 25.04
99 TestFunctional/parallel/SSHCmd 0.65
100 TestFunctional/parallel/CpCmd 2.32
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.09
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
111 TestFunctional/parallel/License 0.29
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.4
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
125 TestFunctional/parallel/ServiceCmd/List 0.56
126 TestFunctional/parallel/ProfileCmd/profile_list 0.42
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
130 TestFunctional/parallel/MountCmd/any-port 5.68
131 TestFunctional/parallel/ServiceCmd/Format 0.51
132 TestFunctional/parallel/ServiceCmd/URL 0.45
133 TestFunctional/parallel/MountCmd/specific-port 2.19
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.85
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.28
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.88
142 TestFunctional/parallel/ImageCommands/Setup 0.66
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.33
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 117.67
160 TestMultiControlPlane/serial/DeployApp 31.25
161 TestMultiControlPlane/serial/PingHostFromPods 1.66
162 TestMultiControlPlane/serial/AddWorkerNode 23.91
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
165 TestMultiControlPlane/serial/CopyFile 18.76
166 TestMultiControlPlane/serial/StopSecondaryNode 12.87
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.45
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.17
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.55
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
173 TestMultiControlPlane/serial/StopCluster 36.09
174 TestMultiControlPlane/serial/RestartCluster 78.68
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.58
176 TestMultiControlPlane/serial/AddSecondaryNode 41.29
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
181 TestJSONOutput/start/Command 52.48
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.78
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 44.48
207 TestKicCustomNetwork/use_default_bridge_network 33.79
208 TestKicExistingNetwork 33.41
209 TestKicCustomSubnet 33.09
210 TestKicStaticIP 36.24
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 68.34
215 TestMountStart/serial/StartWithMountFirst 5.98
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.68
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 7.27
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 65.23
227 TestMultiNode/serial/DeployApp2Nodes 15.05
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 18.94
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 9.96
233 TestMultiNode/serial/StopNode 2.37
234 TestMultiNode/serial/StartAfterStop 9.34
235 TestMultiNode/serial/RestartKeepsNodes 93.24
236 TestMultiNode/serial/DeleteNode 5.52
237 TestMultiNode/serial/StopMultiNode 23.96
238 TestMultiNode/serial/RestartMultiNode 58.03
239 TestMultiNode/serial/ValidateNameConflict 35.51
244 TestPreload 126.68
246 TestScheduledStopUnix 105.63
249 TestInsufficientStorage 12.92
250 TestRunningBinaryUpgrade 91.1
252 TestKubernetesUpgrade 100.47
253 TestMissingContainerUpgrade 174.04
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 40.23
257 TestNoKubernetes/serial/StartWithStopK8s 19.88
258 TestNoKubernetes/serial/Start 6.47
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
260 TestNoKubernetes/serial/ProfileList 1.09
261 TestNoKubernetes/serial/Stop 1.23
262 TestNoKubernetes/serial/StartNoArgs 7.64
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
264 TestStoppedBinaryUpgrade/Setup 0.74
265 TestStoppedBinaryUpgrade/Upgrade 121.28
274 TestPause/serial/Start 74.48
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
283 TestNetworkPlugins/group/false 3.9
284 TestPause/serial/SecondStartNoReconfiguration 7.27
288 TestPause/serial/Pause 0.93
289 TestPause/serial/VerifyStatus 0.41
290 TestPause/serial/Unpause 0.97
291 TestPause/serial/PauseAgain 1.02
292 TestPause/serial/DeletePaused 3.1
293 TestPause/serial/VerifyDeletedResources 0.3
295 TestStartStop/group/old-k8s-version/serial/FirstStart 155.62
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.02
298 TestStartStop/group/no-preload/serial/FirstStart 75.7
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.74
300 TestStartStop/group/old-k8s-version/serial/Stop 13.45
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/no-preload/serial/DeployApp 10.46
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 12.12
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 267.17
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
311 TestStartStop/group/no-preload/serial/Pause 3.29
313 TestStartStop/group/embed-certs/serial/FirstStart 66.54
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
317 TestStartStop/group/old-k8s-version/serial/Pause 3.8
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.88
320 TestStartStop/group/embed-certs/serial/DeployApp 9.45
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
322 TestStartStop/group/embed-certs/serial/Stop 12.04
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/embed-certs/serial/SecondStart 298.61
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.23
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.71
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.27
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/newest-cni/serial/FirstStart 50.05
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
339 TestStartStop/group/embed-certs/serial/Pause 3.14
340 TestNetworkPlugins/group/auto/Start 66.66
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.02
343 TestStartStop/group/newest-cni/serial/Stop 1.37
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.35
345 TestStartStop/group/newest-cni/serial/SecondStart 17.1
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
349 TestStartStop/group/newest-cni/serial/Pause 3.02
350 TestNetworkPlugins/group/flannel/Start 61.12
351 TestNetworkPlugins/group/auto/KubeletFlags 0.35
352 TestNetworkPlugins/group/auto/NetCatPod 9.4
353 TestNetworkPlugins/group/auto/DNS 0.22
354 TestNetworkPlugins/group/auto/Localhost 0.23
355 TestNetworkPlugins/group/auto/HairPin 0.16
356 TestNetworkPlugins/group/calico/Start 64.66
357 TestNetworkPlugins/group/flannel/ControllerPod 6.01
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
359 TestNetworkPlugins/group/flannel/NetCatPod 9.46
360 TestNetworkPlugins/group/flannel/DNS 0.3
361 TestNetworkPlugins/group/flannel/Localhost 0.44
362 TestNetworkPlugins/group/flannel/HairPin 0.25
363 TestNetworkPlugins/group/custom-flannel/Start 58.11
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.28
366 TestNetworkPlugins/group/calico/NetCatPod 10.25
367 TestNetworkPlugins/group/calico/DNS 0.33
368 TestNetworkPlugins/group/calico/Localhost 0.25
369 TestNetworkPlugins/group/calico/HairPin 0.21
370 TestNetworkPlugins/group/kindnet/Start 60.25
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.33
373 TestNetworkPlugins/group/custom-flannel/DNS 0.29
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
376 TestNetworkPlugins/group/bridge/Start 77.71
377 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
378 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
379 TestNetworkPlugins/group/kindnet/NetCatPod 9.37
380 TestNetworkPlugins/group/kindnet/DNS 0.2
381 TestNetworkPlugins/group/kindnet/Localhost 0.18
382 TestNetworkPlugins/group/kindnet/HairPin 0.19
383 TestNetworkPlugins/group/enable-default-cni/Start 39.13
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
385 TestNetworkPlugins/group/bridge/NetCatPod 11.45
386 TestNetworkPlugins/group/bridge/DNS 0.18
387 TestNetworkPlugins/group/bridge/Localhost 0.15
388 TestNetworkPlugins/group/bridge/HairPin 0.2
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
391 TestNetworkPlugins/group/enable-default-cni/DNS 26.43
392 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
393 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (11.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-778826 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-778826 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.303908592s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.30s)
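
To re-run just this subtest against the same binary, a minimal sketch (assuming a minikube source checkout with out/minikube-linux-arm64 already built; the -timeout value is illustrative, and the integration suite may expect additional --args flags not shown here):

	go test ./test/integration -v -timeout 30m -run 'TestDownloadOnly/v1.20.0/json-events'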

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
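
preload-exists only asserts that the tarball fetched by json-events is sitting in the cache. A hand check along the same lines (the path is the MINIKUBE_HOME used by this CI run, taken from the logs below; a default install keeps it under ~/.minikube instead):

	ls -lh /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/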

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-778826
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-778826: exit status 85 (76.082495ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-778826 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |          |
	|         | -p download-only-778826        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:42:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:42:15.663386  293376 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:42:15.663520  293376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:15.663532  293376 out.go:358] Setting ErrFile to fd 2...
	I0816 17:42:15.663536  293376 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:15.663780  293376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	W0816 17:42:15.663920  293376 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19461-287979/.minikube/config/config.json: open /home/jenkins/minikube-integration/19461-287979/.minikube/config/config.json: no such file or directory
	I0816 17:42:15.664311  293376 out.go:352] Setting JSON to true
	I0816 17:42:15.665219  293376 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5065,"bootTime":1723825070,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 17:42:15.665291  293376 start.go:139] virtualization:  
	I0816 17:42:15.668222  293376 out.go:97] [download-only-778826] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0816 17:42:15.668351  293376 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball: no such file or directory
	I0816 17:42:15.668386  293376 notify.go:220] Checking for updates...
	I0816 17:42:15.670491  293376 out.go:169] MINIKUBE_LOCATION=19461
	I0816 17:42:15.672964  293376 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:42:15.675419  293376 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:42:15.677751  293376 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 17:42:15.679783  293376 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 17:42:15.683652  293376 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 17:42:15.683890  293376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:42:15.710348  293376 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:42:15.710462  293376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:15.759502  293376 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:42:15.750357451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:15.759626  293376 docker.go:307] overlay module found
	I0816 17:42:15.761581  293376 out.go:97] Using the docker driver based on user configuration
	I0816 17:42:15.761605  293376 start.go:297] selected driver: docker
	I0816 17:42:15.761611  293376 start.go:901] validating driver "docker" against <nil>
	I0816 17:42:15.761725  293376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:15.811355  293376 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:42:15.802629001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:15.811530  293376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:42:15.811815  293376 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 17:42:15.811976  293376 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 17:42:15.814343  293376 out.go:169] Using Docker driver with root privileges
	I0816 17:42:15.816357  293376 cni.go:84] Creating CNI manager for ""
	I0816 17:42:15.816382  293376 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 17:42:15.816394  293376 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:42:15.816485  293376 start.go:340] cluster config:
	{Name:download-only-778826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-778826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:42:15.818732  293376 out.go:97] Starting "download-only-778826" primary control-plane node in "download-only-778826" cluster
	I0816 17:42:15.818755  293376 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0816 17:42:15.820921  293376 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:42:15.820950  293376 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 17:42:15.821106  293376 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:42:15.836598  293376 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:42:15.837529  293376 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:42:15.837640  293376 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:42:15.901964  293376 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0816 17:42:15.901989  293376 cache.go:56] Caching tarball of preloaded images
	I0816 17:42:15.903118  293376 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 17:42:15.905144  293376 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0816 17:42:15.905165  293376 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0816 17:42:15.988915  293376 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0816 17:42:19.663669  293376 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:42:22.823099  293376 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0816 17:42:22.823208  293376 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0816 17:42:23.955234  293376 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0816 17:42:23.955627  293376 profile.go:143] Saving config to /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/download-only-778826/config.json ...
	I0816 17:42:23.955661  293376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/download-only-778826/config.json: {Name:mk1eb7b3fa3613d63a9f6c9237ae81669d3fc860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 17:42:23.955841  293376 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 17:42:23.956056  293376 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19461-287979/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-778826 host does not exist
	  To start a cluster, run: "minikube start -p download-only-778826"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
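
The exit status 85 appears to be the expected outcome here, since the test still passes: as the stdout above notes, --download-only never creates a host for the profile. The preload URL in the same log embeds its md5 in the checksum query parameter, so the cached tarball can be verified by hand, e.g. (path taken from this run):

	md5sum /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# expect 7e3d48ccb9f143791669d02e14ce1643, per the checksum parameter above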

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-778826
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (7.31s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-528021 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-528021 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.30898623s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.31s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-528021
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-528021: exit status 85 (74.046614ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-778826 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | -p download-only-778826        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| delete  | -p download-only-778826        | download-only-778826 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC | 16 Aug 24 17:42 UTC |
	| start   | -o=json --download-only        | download-only-528021 | jenkins | v1.33.1 | 16 Aug 24 17:42 UTC |                     |
	|         | -p download-only-528021        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/16 17:42:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 17:42:27.379468  293577 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:42:27.379621  293577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:27.379632  293577 out.go:358] Setting ErrFile to fd 2...
	I0816 17:42:27.379637  293577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:42:27.379880  293577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 17:42:27.380280  293577 out.go:352] Setting JSON to true
	I0816 17:42:27.381200  293577 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5077,"bootTime":1723825070,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 17:42:27.381274  293577 start.go:139] virtualization:  
	I0816 17:42:27.383790  293577 out.go:97] [download-only-528021] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:42:27.383986  293577 notify.go:220] Checking for updates...
	I0816 17:42:27.386008  293577 out.go:169] MINIKUBE_LOCATION=19461
	I0816 17:42:27.388034  293577 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:42:27.389935  293577 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:42:27.391815  293577 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 17:42:27.393978  293577 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0816 17:42:27.398366  293577 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0816 17:42:27.398641  293577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:42:27.426625  293577 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:42:27.426724  293577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:27.480441  293577 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:42:27.470642101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:27.480549  293577 docker.go:307] overlay module found
	I0816 17:42:27.482994  293577 out.go:97] Using the docker driver based on user configuration
	I0816 17:42:27.483020  293577 start.go:297] selected driver: docker
	I0816 17:42:27.483026  293577 start.go:901] validating driver "docker" against <nil>
	I0816 17:42:27.483144  293577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:42:27.536810  293577 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-16 17:42:27.52777758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:42:27.537073  293577 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0816 17:42:27.537357  293577 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0816 17:42:27.537604  293577 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 17:42:27.539816  293577 out.go:169] Using Docker driver with root privileges
	I0816 17:42:27.541521  293577 cni.go:84] Creating CNI manager for ""
	I0816 17:42:27.541544  293577 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0816 17:42:27.541555  293577 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0816 17:42:27.541634  293577 start.go:340] cluster config:
	{Name:download-only-528021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-528021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:42:27.543638  293577 out.go:97] Starting "download-only-528021" primary control-plane node in "download-only-528021" cluster
	I0816 17:42:27.543669  293577 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0816 17:42:27.545693  293577 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0816 17:42:27.545729  293577 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 17:42:27.545864  293577 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0816 17:42:27.561674  293577 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0816 17:42:27.561804  293577 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0816 17:42:27.561839  293577 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0816 17:42:27.561845  293577 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0816 17:42:27.561865  293577 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0816 17:42:27.606245  293577 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0816 17:42:27.606274  293577 cache.go:56] Caching tarball of preloaded images
	I0816 17:42:27.606441  293577 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0816 17:42:27.608520  293577 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0816 17:42:27.608546  293577 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0816 17:42:27.694798  293577 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19461-287979/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-528021 host does not exist
	  To start a cluster, run: "minikube start -p download-only-528021"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-528021
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-779675 --alsologtostderr --binary-mirror http://127.0.0.1:46441 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-779675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-779675
--- PASS: TestBinaryMirror (0.56s)
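
TestBinaryMirror points --binary-mirror at a local HTTP endpoint spun up by the test harness; the same flag can be exercised by hand. A rough sketch (the profile name binary-mirror-demo is made up, and python3's built-in server merely stands in for a real mirror, which would have to serve the expected Kubernetes release paths):

	python3 -m http.server 46441 &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:46441 --driver=docker --container-runtime=containerd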

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864899
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-864899: exit status 85 (73.044625ms)

-- stdout --
	* Profile "addons-864899" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864899"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864899
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-864899: exit status 85 (71.28264ms)

-- stdout --
	* Profile "addons-864899" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-864899"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (217.21s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-864899 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-864899 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.21108368s)
--- PASS: TestAddons/Setup (217.21s)
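
Each of the --addons flags in that start invocation can also be toggled on the running cluster with the addons subcommand used throughout the rest of this report, e.g.:

	out/minikube-linux-arm64 -p addons-864899 addons enable metrics-server
	out/minikube-linux-arm64 -p addons-864899 addons disable metrics-server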

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-864899 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-864899 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (15.57s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.540448ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-pvhcl" [67686d20-79b3-4c3c-b120-f96dfea3fa24] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.010149453s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hftrd" [0e434063-8781-41ea-8c5b-c43130270841] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004196652s
addons_test.go:342: (dbg) Run:  kubectl --context addons-864899 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-864899 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-864899 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.411613161s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 ip
2024/08/16 17:50:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.57s)
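
The DEBUG line above shows the registry answering on the node IP at port 5000. The same endpoint can be probed by hand through the standard Docker Registry HTTP API (/v2/_catalog is the registry's repository-listing endpoint, not anything minikube-specific):

	curl http://$(out/minikube-linux-arm64 -p addons-864899 ip):5000/v2/_catalog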

TestAddons/parallel/Ingress (20.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-864899 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-864899 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-864899 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [30c6cc7a-b6f1-4b48-9641-25c3de52ae92] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [30c6cc7a-b6f1-4b48-9641-25c3de52ae92] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003387492s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-864899 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable ingress-dns --alsologtostderr -v=1: (1.830681366s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable ingress --alsologtostderr -v=1: (7.79872368s)
--- PASS: TestAddons/parallel/Ingress (20.26s)

TestAddons/parallel/InspektorGadget (11.07s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tcnmn" [0cc0b8e3-9aeb-405d-9f9c-4a37b75275cf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004728243s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-864899
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-864899: (6.067046739s)
--- PASS: TestAddons/parallel/InspektorGadget (11.07s)

TestAddons/parallel/MetricsServer (5.78s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.284941ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-7x26w" [b9b0a4cd-3c28-4e65-bf08-6b2a89a4fdde] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004721161s
addons_test.go:417: (dbg) Run:  kubectl --context addons-864899 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

TestAddons/parallel/CSI (48.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.96543ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-864899 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-864899 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d458b46-a807-4a93-92cb-c0b5da459e40] Pending
helpers_test.go:344: "task-pv-pod" [1d458b46-a807-4a93-92cb-c0b5da459e40] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d458b46-a807-4a93-92cb-c0b5da459e40] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003524689s
addons_test.go:590: (dbg) Run:  kubectl --context addons-864899 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-864899 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-864899 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-864899 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-864899 delete pod task-pv-pod: (1.205090501s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-864899 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-864899 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-864899 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fc5d0ea3-0c8e-49e9-9336-42c9b1f1b037] Pending
helpers_test.go:344: "task-pv-pod-restore" [fc5d0ea3-0c8e-49e9-9336-42c9b1f1b037] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fc5d0ea3-0c8e-49e9-9336-42c9b1f1b037] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003935657s
addons_test.go:632: (dbg) Run:  kubectl --context addons-864899 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-864899 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-864899 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.840828432s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.32s)
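
The run above walks a full CSI snapshot/restore cycle. As a sketch, with --context addons-864899 omitted for brevity (the manifests are the testdata files named above; their contents are not reproduced here):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC bound by the hostpath CSI driver
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod that writes into the volume
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot of the bound PVC
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC restored from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod that reads the restored data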

TestAddons/parallel/Headlamp (16.36s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-864899 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-864899 --alsologtostderr -v=1: (1.583017142s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-jvlb2" [364eae6f-a601-41dc-a26f-45e35734a4a4] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-jvlb2" [364eae6f-a601-41dc-a26f-45e35734a4a4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-jvlb2" [364eae6f-a601-41dc-a26f-45e35734a4a4] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004424599s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable headlamp --alsologtostderr -v=1: (5.767009287s)
--- PASS: TestAddons/parallel/Headlamp (16.36s)

TestAddons/parallel/CloudSpanner (6.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-6np5w" [1cc92043-582f-41b2-8ff1-9dd183dcac1e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003433627s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-864899
--- PASS: TestAddons/parallel/CloudSpanner (6.70s)

TestAddons/parallel/LocalPath (53.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-864899 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-864899 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1e17092c-b2c0-401d-aff1-0405540e5fb8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1e17092c-b2c0-401d-aff1-0405540e5fb8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1e17092c-b2c0-401d-aff1-0405540e5fb8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004441252s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-864899 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 ssh "cat /opt/local-path-provisioner/pvc-b922269f-9fb7-45d8-b84e-9b10e555c0ea_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-864899 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-864899 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.571377062s)
--- PASS: TestAddons/parallel/LocalPath (53.03s)
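
The local-path provisioner backs each bound claim with a directory on the node, which is why the test can cat the written file over ssh; the pvc-... directory name is generated per claim. Sketch:

    kubectl --context addons-864899 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-864899 apply -f testdata/storage-provisioner-rancher/pod.yaml
    minikube -p addons-864899 ssh "ls /opt/local-path-provisioner"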

TestAddons/parallel/NvidiaDevicePlugin (5.67s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k9vv2" [1abed4b9-c96b-4ed0-9de5-2035c284afa5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004714794s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-864899
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.67s)

TestAddons/parallel/Yakd (12.12s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-b44t7" [e311a14a-4669-4065-be6b-d055bcc1af30] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003690456s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-864899 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-864899 addons disable yakd --alsologtostderr -v=1: (6.118783636s)
--- PASS: TestAddons/parallel/Yakd (12.12s)

TestAddons/StoppedEnableDisable (12.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-864899
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-864899: (12.032665408s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-864899
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-864899
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-864899
--- PASS: TestAddons/StoppedEnableDisable (12.29s)
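
The point of this test is that addon toggles are accepted while the cluster is stopped, since they only update the profile's stored configuration. Sketch:

    minikube stop -p addons-864899
    minikube addons enable dashboard -p addons-864899     # recorded in the profile while stopped
    minikube addons disable dashboard -p addons-864899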

TestCertOptions (35.21s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-650240 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-650240 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.538959925s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-650240 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-650240 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-650240 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-650240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-650240
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-650240: (2.008745274s)
--- PASS: TestCertOptions (35.21s)
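
The ssh step exists to confirm that the custom --apiserver-ips, --apiserver-names, and --apiserver-port made it into the generated API server certificate. A sketch of inspecting it by hand (the grep filter is illustrative, not part of the test):

    minikube -p cert-options-650240 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"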

TestCertExpiration (229.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-037923 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-037923 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.696664377s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-037923 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
E0816 18:30:23.312048  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-037923 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.635264339s)
helpers_test.go:175: Cleaning up "cert-expiration-037923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-037923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-037923: (2.260177772s)
--- PASS: TestCertExpiration (229.59s)
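
The flow: start with deliberately short-lived certificates, let them age past expiry (hence the long wall-clock time), then restart with a longer --cert-expiration so minikube regenerates them. Sketch:

    minikube start -p cert-expiration-037923 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...wait out the 3m expiry, then:
    minikube start -p cert-expiration-037923 --cert-expiration=8760h --driver=docker --container-runtime=containerd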

TestForceSystemdFlag (38.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-323200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0816 18:26:13.788142  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-323200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.876629195s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-323200 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-323200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-323200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-323200: (2.022156682s)
--- PASS: TestForceSystemdFlag (38.19s)
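
The cat of /etc/containerd/config.toml checks that --force-systemd switched containerd to the systemd cgroup driver. A sketch of the same check, assuming the relevant key is SystemdCgroup:

    minikube -p force-systemd-flag-323200 ssh "cat /etc/containerd/config.toml" \
      | grep SystemdCgroup    # expected: SystemdCgroup = true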

TestForceSystemdEnv (42.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-140539 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-140539 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.899549543s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-140539 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-140539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-140539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-140539: (2.451568477s)
--- PASS: TestForceSystemdEnv (42.74s)

TestDockerEnvContainerd (43.78s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-912069 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-912069 --driver=docker  --container-runtime=containerd: (28.277850633s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-912069"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tsKucGTVScXA/agent.312475" SSH_AGENT_PID="312476" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tsKucGTVScXA/agent.312475" SSH_AGENT_PID="312476" DOCKER_HOST=ssh://docker@127.0.0.1:33145 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tsKucGTVScXA/agent.312475" SSH_AGENT_PID="312476" DOCKER_HOST=ssh://docker@127.0.0.1:33145 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.069449187s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tsKucGTVScXA/agent.312475" SSH_AGENT_PID="312476" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tsKucGTVScXA/agent.312475" SSH_AGENT_PID="312476" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker image ls": (1.005074505s)
helpers_test.go:175: Cleaning up "dockerenv-912069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-912069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-912069: (1.960806127s)
--- PASS: TestDockerEnvContainerd (43.78s)
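
The test harness exports SSH_AUTH_SOCK and DOCKER_HOST by hand; outside of tests, the usual pattern is to eval the docker-env output instead. Sketch:

    eval "$(minikube -p dockerenv-912069 docker-env --ssh-host --ssh-add)"
    docker version        # now talks to the daemon inside the minikube node over SSH
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env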

TestErrorSpam/setup (32.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-273925 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-273925 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-273925 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-273925 --driver=docker  --container-runtime=containerd: (32.466274757s)
--- PASS: TestErrorSpam/setup (32.47s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 pause
--- PASS: TestErrorSpam/pause (1.76s)

TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 stop: (1.276837212s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-273925 --log_dir /tmp/nospam-273925 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19461-287979/.minikube/files/etc/test/nested/copy/293371/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.83s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-700256 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (46.824760837s)
--- PASS: TestFunctional/serial/StartWithProxy (46.83s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-700256 --alsologtostderr -v=8: (6.513681704s)
functional_test.go:663: soft start took 6.516293065s for "functional-700256" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.52s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-700256 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:3.1: (1.613283762s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:3.3: (1.347485434s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 cache add registry.k8s.io/pause:latest: (1.233878277s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-700256 /tmp/TestFunctionalserialCacheCmdcacheadd_local2769903483/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache add minikube-local-cache-test:functional-700256
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache delete minikube-local-cache-test:functional-700256
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-700256
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.96883ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 cache reload: (1.118691041s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
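
Taken together, the cache subtests exercise the full round trip: cache an image, delete it from the node's container runtime, and reload the cache to push it back. Sketch:

    minikube -p functional-700256 cache add registry.k8s.io/pause:latest
    minikube -p functional-700256 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-700256 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero
    minikube -p functional-700256 cache reload
    minikube -p functional-700256 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again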

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 kubectl -- --context functional-700256 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-700256 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (47.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-700256 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.820168301s)
functional_test.go:761: restart took 47.820277396s for "functional-700256" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.82s)
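
--extra-config takes component.flag=value pairs, so the restart above injects an admission plugin into the API server's command line. Sketch:

    minikube start -p functional-700256 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all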

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-700256 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 logs: (1.650003728s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.63s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 logs --file /tmp/TestFunctionalserialLogsFileCmd4006146801/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 logs --file /tmp/TestFunctionalserialLogsFileCmd4006146801/001/logs.txt: (1.633091164s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.63s)
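
These two subtests cover both output modes of minikube logs. Sketch (the file path is arbitrary):

    minikube -p functional-700256 logs                         # stream to stdout
    minikube -p functional-700256 logs --file /tmp/logs.txt    # write to a file instead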

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-700256 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-700256
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-700256: exit status 115 (593.76088ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31268 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-700256 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 config get cpus: exit status 14 (76.198686ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 config get cpus: exit status 14 (68.838244ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (9.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-700256 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-700256 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 327263: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.65s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-700256 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (272.08269ms)
-- stdout --
	* [functional-700256] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0816 17:55:53.053514  326833 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:55:53.053638  326833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:55:53.053643  326833 out.go:358] Setting ErrFile to fd 2...
	I0816 17:55:53.053648  326833 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:55:53.053930  326833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 17:55:53.054284  326833 out.go:352] Setting JSON to false
	I0816 17:55:53.055223  326833 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5883,"bootTime":1723825070,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 17:55:53.055286  326833 start.go:139] virtualization:  
	I0816 17:55:53.057597  326833 out.go:177] * [functional-700256] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 17:55:53.060564  326833 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:55:53.060765  326833 notify.go:220] Checking for updates...
	I0816 17:55:53.063963  326833 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:55:53.066284  326833 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:55:53.067946  326833 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 17:55:53.071762  326833 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 17:55:53.073883  326833 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:55:53.076494  326833 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:55:53.077171  326833 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:55:53.133597  326833 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:55:53.133726  326833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:55:53.235712  326833 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:55:53.223158537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:55:53.235831  326833 docker.go:307] overlay module found
	I0816 17:55:53.238388  326833 out.go:177] * Using the docker driver based on existing profile
	I0816 17:55:53.240140  326833 start.go:297] selected driver: docker
	I0816 17:55:53.240157  326833 start.go:901] validating driver "docker" against &{Name:functional-700256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-700256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:55:53.240317  326833 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:55:53.245077  326833 out.go:201] 
	W0816 17:55:53.247102  326833 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 17:55:53.248575  326833 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.56s)
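
The non-zero exit is the expected outcome: with --dry-run minikube validates the requested configuration without starting anything, and 250MB is below the enforced 1800MB minimum, producing the RSRC_INSUFFICIENT_REQ_MEMORY error above. Sketch:

    minikube start -p functional-700256 --dry-run --memory 250MB \
      --driver=docker --container-runtime=containerd
    echo $?    # 23 in this run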

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-700256 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-700256 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (204.209127ms)

-- stdout --
	* [functional-700256] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0816 17:55:52.838266  326788 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:55:52.838455  326788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:55:52.838484  326788 out.go:358] Setting ErrFile to fd 2...
	I0816 17:55:52.838505  326788 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:55:52.838902  326788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 17:55:52.839311  326788 out.go:352] Setting JSON to false
	I0816 17:55:52.840454  326788 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5883,"bootTime":1723825070,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 17:55:52.840558  326788 start.go:139] virtualization:  
	I0816 17:55:52.843055  326788 out.go:177] * [functional-700256] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0816 17:55:52.845068  326788 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 17:55:52.845148  326788 notify.go:220] Checking for updates...
	I0816 17:55:52.848624  326788 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 17:55:52.850535  326788 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 17:55:52.852186  326788 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 17:55:52.854076  326788 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 17:55:52.855809  326788 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 17:55:52.857917  326788 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:55:52.858446  326788 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 17:55:52.893869  326788 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 17:55:52.894002  326788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:55:52.966755  326788 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-16 17:55:52.956621077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:55:52.966874  326788 docker.go:307] overlay module found
	I0816 17:55:52.968875  326788 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0816 17:55:52.970638  326788 start.go:297] selected driver: docker
	I0816 17:55:52.970662  326788 start.go:901] validating driver "docker" against &{Name:functional-700256 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-700256 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0816 17:55:52.970772  326788 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 17:55:52.973172  326788 out.go:201] 
	W0816 17:55:52.975280  326788 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 17:55:52.977169  326788 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
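InternationalLanguage repeats the same under-provisioned --dry-run start under a French locale and expects the localized failure; the stderr above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo") translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A sketch of driving the CLI under a specific locale with os/exec; that minikube keys off LC_ALL here (rather than LANG) is an assumption:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same command as the log line above, run with a French locale in the environment.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-700256",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=docker", "--container-runtime=containerd")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8") // assumed locale switch
		out, err := cmd.CombinedOutput()
		fmt.Printf("exit err: %v\n%s", err, out) // expect exit status 23 and French output
	}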

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
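`status -f` accepts a Go text/template over the status struct, which is why the "kublet" label in the command above appears verbatim in the output: only the {{...}} fields are evaluated, literal text passes through. A sketch with a stand-in struct, not minikube's real type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in; field names match the template keys used in the test command.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
	}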

TestFunctional/parallel/ServiceCmdConnect (10.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-700256 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-700256 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-jw9zd" [159aea88-73c6-422d-a024-e0f01cd63849] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-jw9zd" [159aea88-73c6-422d-a024-e0f01cd63849] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004186816s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31541
functional_test.go:1675: http://192.168.49.2:31541: success! body:

Hostname: hello-node-connect-65d86f57f4-jw9zd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31541
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
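The test deploys echoserver, exposes it as a NodePort service, resolves the URL with `minikube service --url`, and issues a plain GET; the echoed body above is the server reflecting the request back. The verification step amounts to roughly this, with the URL hard-coded from the log:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// URL as printed by `minikube service hello-node-connect --url` above.
		resp, err := http.Get("http://192.168.49.2:31541")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d\n%s", resp.StatusCode, body) // echoserver reflects the request
	}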

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f7edd388-40fd-41fd-a54d-97aca8a864ea] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008358414s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-700256 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-700256 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-700256 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-700256 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [341a2edc-6d31-42ad-89f0-0ede7c745290] Pending
helpers_test.go:344: "sp-pod" [341a2edc-6d31-42ad-89f0-0ede7c745290] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [341a2edc-6d31-42ad-89f0-0ede7c745290] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003084963s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-700256 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-700256 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-700256 delete -f testdata/storage-provisioner/pod.yaml: (1.058605766s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-700256 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fe91c748-c39d-43e2-b8f5-e894e525ff1d] Pending
helpers_test.go:344: "sp-pod" [fe91c748-c39d-43e2-b8f5-e894e525ff1d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003449719s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-700256 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.04s)
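The recurring "waiting Nm0s for pods matching ..." lines come from a helper that polls the cluster until a labeled pod reports Running. A minimal client-go sketch of that loop, assuming a 2-second poll interval; the real helper in helpers_test.go also handles client rate limiting and dumps logs on failure:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from this run's environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19461-287979/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
				LabelSelector: "test=storage-provisioner", // selector shown in the log
			})
			if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
	}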

TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (2.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh -n functional-700256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cp functional-700256:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1763241544/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh -n functional-700256 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh -n functional-700256 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.32s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/293371/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /etc/test/nested/copy/293371/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/293371.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /etc/ssl/certs/293371.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/293371.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /usr/share/ca-certificates/293371.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2933712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /etc/ssl/certs/2933712.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2933712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /usr/share/ca-certificates/2933712.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-700256 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh "sudo systemctl is-active docker": exit status 1 (460.457024ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh "sudo systemctl is-active crio": exit status 1 (300.029173ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
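The exit status 1 results above are the expected outcome: `systemctl is-active` exits 0 only when the unit is active and exits 3 with "inactive" on stdout otherwise, so a non-zero exit here proves docker and crio are disabled while containerd serves as the runtime. A sketch of reading that exit code from Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
		code := 0
		if exitErr, ok := err.(*exec.ExitError); ok {
			code = exitErr.ExitCode() // 3 means "inactive" per systemctl conventions
		}
		fmt.Printf("output=%q exit=%d\n", out, code)
	}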

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 324278: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-700256 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3ab9fb1d-d3ea-47f6-b5dd-68691c5e960b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3ab9fb1d-d3ea-47f6-b5dd-68691c5e960b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003556984s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-700256 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
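The jsonpath query above reads the LoadBalancer ingress IP that `minikube tunnel` assigned to nginx-svc. The equivalent lookup through client-go, as a sketch using the kubeconfig path from this run:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19461-287979/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		svc, err := cs.CoreV1().Services("default").Get(context.Background(), "nginx-svc", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if ing := svc.Status.LoadBalancer.Ingress; len(ing) > 0 {
			fmt.Println(ing[0].IP) // 10.102.130.123 in this run, assigned by the tunnel
		}
	}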

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.130.123 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-700256 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-700256 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-700256 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-hktql" [659ca443-2a7f-439b-8a95-5459f4c5eaa8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-hktql" [659ca443-2a7f-439b-8a95-5459f4c5eaa8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004760195s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "360.845668ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "58.578967ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service list -o json
functional_test.go:1494: Took "549.0608ms" to run "out/minikube-linux-arm64 -p functional-700256 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "381.322449ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "93.674809ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
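`profile list -o json` (and its --light variant) emit machine-readable profile data; the timings above only assert that the command stays fast. A sketch of consuming that output, with a deliberately partial struct since the full schema is not shown in this log and the field names here are assumptions:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList is a hypothetical, partial shape of the JSON output.
	type profileList struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Println(p.Name)
		}
	}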

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32307
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/MountCmd/any-port (5.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdany-port3619852121/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723830950340631169" to /tmp/TestFunctionalparallelMountCmdany-port3619852121/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723830950340631169" to /tmp/TestFunctionalparallelMountCmdany-port3619852121/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723830950340631169" to /tmp/TestFunctionalparallelMountCmdany-port3619852121/001/test-1723830950340631169
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 17:55 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 17:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 17:55 test-1723830950340631169
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh cat /mount-9p/test-1723830950340631169
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-700256 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cefb2bf8-d484-435c-9846-06e8d4944b98] Pending
helpers_test.go:344: "busybox-mount" [cefb2bf8-d484-435c-9846-06e8d4944b98] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cefb2bf8-d484-435c-9846-06e8d4944b98] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cefb2bf8-d484-435c-9846-06e8d4944b98] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004298911s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-700256 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdany-port3619852121/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.68s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32307
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

TestFunctional/parallel/MountCmd/specific-port (2.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdspecific-port4291606900/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (355.489648ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdspecific-port4291606900/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh "sudo umount -f /mount-9p": exit status 1 (361.179599ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-700256 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdspecific-port4291606900/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T" /mount1: exit status 1 (911.380882ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-700256 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-700256 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4067367888/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.85s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.28s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 version -o=json --components: (1.282665305s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700256 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-700256
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-700256
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700256 image ls --format short --alsologtostderr:
I0816 17:56:08.990420  329741 out.go:345] Setting OutFile to fd 1 ...
I0816 17:56:08.990578  329741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:08.990584  329741 out.go:358] Setting ErrFile to fd 2...
I0816 17:56:08.990589  329741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:08.990823  329741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
I0816 17:56:08.991447  329741 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:08.991557  329741 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:08.992061  329741 cli_runner.go:164] Run: docker container inspect functional-700256 --format={{.State.Status}}
I0816 17:56:09.022629  329741 ssh_runner.go:195] Run: systemctl --version
I0816 17:56:09.022684  329741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700256
I0816 17:56:09.046288  329741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/functional-700256/id_rsa Username:docker}
I0816 17:56:09.145937  329741 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
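The stderr above shows what `image ls` does under the hood: inspect the container through the Docker CLI, open an SSH session to the node, and run `sudo crictl images --output json`. The JSON array dumped by ImageListJson below can be decoded with a small struct whose field names are taken from that output; a sketch:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image matches the entries in the `image ls --format json` dump below.
	type image struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-700256",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			fmt.Println(img.ID, img.RepoTags, img.Size)
		}
	}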

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700256 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-700256  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-700256  | sha256:a226eb | 991B   |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700256 image ls --format table --alsologtostderr:
I0816 17:56:09.299421  329808 out.go:345] Setting OutFile to fd 1 ...
I0816 17:56:09.299642  329808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.299665  329808 out.go:358] Setting ErrFile to fd 2...
I0816 17:56:09.299685  329808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.299957  329808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
I0816 17:56:09.300685  329808 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.300855  329808 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.301400  329808 cli_runner.go:164] Run: docker container inspect functional-700256 --format={{.State.Status}}
I0816 17:56:09.325047  329808 ssh_runner.go:195] Run: systemctl --version
I0816 17:56:09.325114  329808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700256
I0816 17:56:09.347000  329808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/functional-700256/id_rsa Username:docker}
I0816 17:56:09.441541  329808 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700256 image ls --format json --alsologtostderr:
[{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-700256"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:a226eba26e290d171f99634ea4b88b26624b324dd4ec6deca88bd863b1bda7f5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-700256"],"size":"991"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700256 image ls --format json --alsologtostderr:
I0816 17:56:09.274545  329803 out.go:345] Setting OutFile to fd 1 ...
I0816 17:56:09.274782  329803 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.274792  329803 out.go:358] Setting ErrFile to fd 2...
I0816 17:56:09.274798  329803 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.275131  329803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
I0816 17:56:09.276270  329803 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.276490  329803 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.277186  329803 cli_runner.go:164] Run: docker container inspect functional-700256 --format={{.State.Status}}
I0816 17:56:09.309706  329803 ssh_runner.go:195] Run: systemctl --version
I0816 17:56:09.309775  329803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700256
I0816 17:56:09.348390  329803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/functional-700256/id_rsa Username:docker}
I0816 17:56:09.446839  329803 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
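Note: the stdout above is a flat JSON array of {id, repoDigests, repoTags, size} objects, which makes it easy to post-process outside the test harness. A minimal Go sketch, not part of the test suite; the binary path and profile name are copied from this run, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors only the fields visible in the JSON captured above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, emitted as a string
}

func main() {
	// Same binary and profile as this run; adjust both for your setup.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-700256",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s bytes\n", tag, img.Size)
		}
	}
}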

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-700256 image ls --format yaml --alsologtostderr:
- id: sha256:a226eba26e290d171f99634ea4b88b26624b324dd4ec6deca88bd863b1bda7f5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-700256
size: "991"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-700256
size: "2173567"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700256 image ls --format yaml --alsologtostderr:
I0816 17:56:09.010207  329742 out.go:345] Setting OutFile to fd 1 ...
I0816 17:56:09.010534  329742 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.010563  329742 out.go:358] Setting ErrFile to fd 2...
I0816 17:56:09.010581  329742 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.010879  329742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
I0816 17:56:09.011600  329742 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.011895  329742 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.012542  329742 cli_runner.go:164] Run: docker container inspect functional-700256 --format={{.State.Status}}
I0816 17:56:09.036149  329742 ssh_runner.go:195] Run: systemctl --version
I0816 17:56:09.036206  329742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700256
I0816 17:56:09.057734  329742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/functional-700256/id_rsa Username:docker}
I0816 17:56:09.151125  329742 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-700256 ssh pgrep buildkitd: exit status 1 (258.346941ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image build -t localhost/my-image:functional-700256 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 image build -t localhost/my-image:functional-700256 testdata/build --alsologtostderr: (2.388732263s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-700256 image build -t localhost/my-image:functional-700256 testdata/build --alsologtostderr:
I0816 17:56:09.798762  329928 out.go:345] Setting OutFile to fd 1 ...
I0816 17:56:09.799452  329928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.799490  329928 out.go:358] Setting ErrFile to fd 2...
I0816 17:56:09.799511  329928 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0816 17:56:09.799785  329928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
I0816 17:56:09.800432  329928 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.801626  329928 config.go:182] Loaded profile config "functional-700256": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0816 17:56:09.802147  329928 cli_runner.go:164] Run: docker container inspect functional-700256 --format={{.State.Status}}
I0816 17:56:09.820147  329928 ssh_runner.go:195] Run: systemctl --version
I0816 17:56:09.820211  329928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-700256
I0816 17:56:09.836975  329928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/functional-700256/id_rsa Username:docker}
I0816 17:56:09.925373  329928 build_images.go:161] Building image from path: /tmp/build.2551483830.tar
I0816 17:56:09.925442  329928 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0816 17:56:09.934426  329928 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2551483830.tar
I0816 17:56:09.937931  329928 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2551483830.tar: stat -c "%s %y" /var/lib/minikube/build/build.2551483830.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2551483830.tar': No such file or directory
I0816 17:56:09.937960  329928 ssh_runner.go:362] scp /tmp/build.2551483830.tar --> /var/lib/minikube/build/build.2551483830.tar (3072 bytes)
I0816 17:56:09.962760  329928 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2551483830
I0816 17:56:09.972263  329928 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2551483830 -xf /var/lib/minikube/build/build.2551483830.tar
I0816 17:56:09.981254  329928 containerd.go:394] Building image: /var/lib/minikube/build/build.2551483830
I0816 17:56:09.981330  329928 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2551483830 --local dockerfile=/var/lib/minikube/build/build.2551483830 --output type=image,name=localhost/my-image:functional-700256
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:169667913bd1a7fed53cf6cbb63086a6494b07ddde0a897144421dd42f071598 0.0s done
#8 exporting config sha256:881263cd52da19571c86c14ea6d2ce6919f7e75765892838e2992f81fab88bd7 0.0s done
#8 naming to localhost/my-image:functional-700256 done
#8 DONE 0.1s
I0816 17:56:12.113572  329928 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2551483830 --local dockerfile=/var/lib/minikube/build/build.2551483830 --output type=image,name=localhost/my-image:functional-700256: (2.1322134s)
I0816 17:56:12.113655  329928 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2551483830
I0816 17:56:12.123492  329928 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2551483830.tar
I0816 17:56:12.135351  329928 build_images.go:217] Built localhost/my-image:functional-700256 from /tmp/build.2551483830.tar
I0816 17:56:12.135425  329928 build_images.go:133] succeeded building to: functional-700256
I0816 17:56:12.135444  329928 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.88s)
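The build flow visible in the log above: the client tars the build context (/tmp/build.2551483830.tar), copies it into the node, untars it under /var/lib/minikube/build, and runs buildctl against containerd. A minimal Go sketch of the client-side call; the Dockerfile content is inferred from buildkit steps #2, #6, and #7 above, and the exact contents of testdata/build are an assumption:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		panic(err)
	}
	// Dockerfile inferred from the buildkit steps logged above;
	// the real testdata/build may differ in detail.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// The payload is arbitrary here; the log only shows a 62B context transfer.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-700256",
		"image", "build", "-t", "localhost/my-image:functional-700256", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}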

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-700256
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image load --daemon kicbase/echo-server:functional-700256 --alsologtostderr
2024/08/16 17:56:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 image load --daemon kicbase/echo-server:functional-700256 --alsologtostderr: (1.061372761s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image load --daemon kicbase/echo-server:functional-700256 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-700256 image load --daemon kicbase/echo-server:functional-700256 --alsologtostderr: (1.048656034s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-700256
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image load --daemon kicbase/echo-server:functional-700256 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image save kicbase/echo-server:functional-700256 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image rm kicbase/echo-server:functional-700256 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-700256
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-700256 image save --daemon kicbase/echo-server:functional-700256 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-700256
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-700256
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-700256
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-700256
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (117.67s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614290 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0816 17:56:16.361537  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:56:18.923056  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:56:24.044597  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:56:34.286657  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:56:54.768082  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 17:57:35.730024  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-614290 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m56.834957617s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (117.67s)

TestMultiControlPlane/serial/DeployApp (31.25s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-614290 -- rollout status deployment/busybox: (28.300649162s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-9kps6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-bw268 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-tv2gg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-9kps6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-bw268 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-tv2gg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-9kps6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-bw268 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-tv2gg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.25s)

TestMultiControlPlane/serial/PingHostFromPods (1.66s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-9kps6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-9kps6 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-bw268 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-bw268 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-tv2gg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614290 -- exec busybox-7dff88458-tv2gg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.66s)

TestMultiControlPlane/serial/AddWorkerNode (23.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-614290 -v=7 --alsologtostderr
E0816 17:58:57.651618  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-614290 -v=7 --alsologtostderr: (22.936830211s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.91s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-614290 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (18.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp testdata/cp-test.txt ha-614290:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78386515/001/cp-test_ha-614290.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290:/home/docker/cp-test.txt ha-614290-m02:/home/docker/cp-test_ha-614290_ha-614290-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test_ha-614290_ha-614290-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290:/home/docker/cp-test.txt ha-614290-m03:/home/docker/cp-test_ha-614290_ha-614290-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test_ha-614290_ha-614290-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290:/home/docker/cp-test.txt ha-614290-m04:/home/docker/cp-test_ha-614290_ha-614290-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test_ha-614290_ha-614290-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp testdata/cp-test.txt ha-614290-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78386515/001/cp-test_ha-614290-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m02:/home/docker/cp-test.txt ha-614290:/home/docker/cp-test_ha-614290-m02_ha-614290.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test_ha-614290-m02_ha-614290.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m02:/home/docker/cp-test.txt ha-614290-m03:/home/docker/cp-test_ha-614290-m02_ha-614290-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test_ha-614290-m02_ha-614290-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m02:/home/docker/cp-test.txt ha-614290-m04:/home/docker/cp-test_ha-614290-m02_ha-614290-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test_ha-614290-m02_ha-614290-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp testdata/cp-test.txt ha-614290-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78386515/001/cp-test_ha-614290-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m03:/home/docker/cp-test.txt ha-614290:/home/docker/cp-test_ha-614290-m03_ha-614290.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test_ha-614290-m03_ha-614290.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m03:/home/docker/cp-test.txt ha-614290-m02:/home/docker/cp-test_ha-614290-m03_ha-614290-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test_ha-614290-m03_ha-614290-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m03:/home/docker/cp-test.txt ha-614290-m04:/home/docker/cp-test_ha-614290-m03_ha-614290-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test_ha-614290-m03_ha-614290-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp testdata/cp-test.txt ha-614290-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile78386515/001/cp-test_ha-614290-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m04:/home/docker/cp-test.txt ha-614290:/home/docker/cp-test_ha-614290-m04_ha-614290.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290 "sudo cat /home/docker/cp-test_ha-614290-m04_ha-614290.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m04:/home/docker/cp-test.txt ha-614290-m02:/home/docker/cp-test_ha-614290-m04_ha-614290-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m02 "sudo cat /home/docker/cp-test_ha-614290-m04_ha-614290-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 cp ha-614290-m04:/home/docker/cp-test.txt ha-614290-m03:/home/docker/cp-test_ha-614290-m04_ha-614290-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 ssh -n ha-614290-m03 "sudo cat /home/docker/cp-test_ha-614290-m04_ha-614290-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.76s)

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 node stop m02 -v=7 --alsologtostderr: (12.095802419s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr: exit status 7 (775.676585ms)

-- stdout --
	ha-614290
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614290-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614290-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0816 17:59:41.560573  346193 out.go:345] Setting OutFile to fd 1 ...
	I0816 17:59:41.560689  346193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:59:41.560699  346193 out.go:358] Setting ErrFile to fd 2...
	I0816 17:59:41.560705  346193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 17:59:41.560973  346193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 17:59:41.561522  346193 out.go:352] Setting JSON to false
	I0816 17:59:41.561560  346193 mustload.go:65] Loading cluster: ha-614290
	I0816 17:59:41.561652  346193 notify.go:220] Checking for updates...
	I0816 17:59:41.562044  346193 config.go:182] Loaded profile config "ha-614290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 17:59:41.562063  346193 status.go:255] checking status of ha-614290 ...
	I0816 17:59:41.562590  346193 cli_runner.go:164] Run: docker container inspect ha-614290 --format={{.State.Status}}
	I0816 17:59:41.582936  346193 status.go:330] ha-614290 host status = "Running" (err=<nil>)
	I0816 17:59:41.582964  346193 host.go:66] Checking if "ha-614290" exists ...
	I0816 17:59:41.583274  346193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614290
	I0816 17:59:41.626520  346193 host.go:66] Checking if "ha-614290" exists ...
	I0816 17:59:41.626829  346193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:59:41.626884  346193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614290
	I0816 17:59:41.649212  346193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/ha-614290/id_rsa Username:docker}
	I0816 17:59:41.747302  346193 ssh_runner.go:195] Run: systemctl --version
	I0816 17:59:41.753711  346193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:59:41.765783  346193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 17:59:41.832561  346193 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-16 17:59:41.822589651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 17:59:41.833253  346193 kubeconfig.go:125] found "ha-614290" server: "https://192.168.49.254:8443"
	I0816 17:59:41.833289  346193 api_server.go:166] Checking apiserver status ...
	I0816 17:59:41.833335  346193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:59:41.846014  346193 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup
	I0816 17:59:41.856628  346193 api_server.go:182] apiserver freezer: "3:freezer:/docker/5c611cce07551604309b85d0af8c068ed749dfda4029dda366174cbaa258b7ea/kubepods/burstable/pod5397dd2731aa45d44406f3905c44d8a4/39eee70a9b6f132d0a868fd7e82a9b5e180ea58f445b949958993949afa94d28"
	I0816 17:59:41.856704  346193 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5c611cce07551604309b85d0af8c068ed749dfda4029dda366174cbaa258b7ea/kubepods/burstable/pod5397dd2731aa45d44406f3905c44d8a4/39eee70a9b6f132d0a868fd7e82a9b5e180ea58f445b949958993949afa94d28/freezer.state
	I0816 17:59:41.866427  346193 api_server.go:204] freezer state: "THAWED"
	I0816 17:59:41.866454  346193 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 17:59:41.874599  346193 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 17:59:41.874626  346193 status.go:422] ha-614290 apiserver status = Running (err=<nil>)
	I0816 17:59:41.874638  346193 status.go:257] ha-614290 status: &{Name:ha-614290 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:59:41.874660  346193 status.go:255] checking status of ha-614290-m02 ...
	I0816 17:59:41.874973  346193 cli_runner.go:164] Run: docker container inspect ha-614290-m02 --format={{.State.Status}}
	I0816 17:59:41.891683  346193 status.go:330] ha-614290-m02 host status = "Stopped" (err=<nil>)
	I0816 17:59:41.891707  346193 status.go:343] host is not running, skipping remaining checks
	I0816 17:59:41.891714  346193 status.go:257] ha-614290-m02 status: &{Name:ha-614290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:59:41.891734  346193 status.go:255] checking status of ha-614290-m03 ...
	I0816 17:59:41.892119  346193 cli_runner.go:164] Run: docker container inspect ha-614290-m03 --format={{.State.Status}}
	I0816 17:59:41.908258  346193 status.go:330] ha-614290-m03 host status = "Running" (err=<nil>)
	I0816 17:59:41.908287  346193 host.go:66] Checking if "ha-614290-m03" exists ...
	I0816 17:59:41.908590  346193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614290-m03
	I0816 17:59:41.925313  346193 host.go:66] Checking if "ha-614290-m03" exists ...
	I0816 17:59:41.925656  346193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:59:41.925714  346193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614290-m03
	I0816 17:59:41.943759  346193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/ha-614290-m03/id_rsa Username:docker}
	I0816 17:59:42.038798  346193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:59:42.051198  346193 kubeconfig.go:125] found "ha-614290" server: "https://192.168.49.254:8443"
	I0816 17:59:42.051229  346193 api_server.go:166] Checking apiserver status ...
	I0816 17:59:42.051279  346193 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 17:59:42.063879  346193 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1421/cgroup
	I0816 17:59:42.075187  346193 api_server.go:182] apiserver freezer: "3:freezer:/docker/0f41edcbafcfbed1a43d49387d42c9ccc432b41108f42a06ce96cec23273bcf6/kubepods/burstable/pod89d16c27d025d57eb2d530df308b44a5/b4dcd707740d19f6247a82592dcc273bd0e435d080a14b61ba9bb6bcacdefb17"
	I0816 17:59:42.075274  346193 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0f41edcbafcfbed1a43d49387d42c9ccc432b41108f42a06ce96cec23273bcf6/kubepods/burstable/pod89d16c27d025d57eb2d530df308b44a5/b4dcd707740d19f6247a82592dcc273bd0e435d080a14b61ba9bb6bcacdefb17/freezer.state
	I0816 17:59:42.086558  346193 api_server.go:204] freezer state: "THAWED"
	I0816 17:59:42.086595  346193 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0816 17:59:42.094916  346193 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0816 17:59:42.094950  346193 status.go:422] ha-614290-m03 apiserver status = Running (err=<nil>)
	I0816 17:59:42.094961  346193 status.go:257] ha-614290-m03 status: &{Name:ha-614290-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 17:59:42.094981  346193 status.go:255] checking status of ha-614290-m04 ...
	I0816 17:59:42.095342  346193 cli_runner.go:164] Run: docker container inspect ha-614290-m04 --format={{.State.Status}}
	I0816 17:59:42.116225  346193 status.go:330] ha-614290-m04 host status = "Running" (err=<nil>)
	I0816 17:59:42.116264  346193 host.go:66] Checking if "ha-614290-m04" exists ...
	I0816 17:59:42.116634  346193 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614290-m04
	I0816 17:59:42.138874  346193 host.go:66] Checking if "ha-614290-m04" exists ...
	I0816 17:59:42.139321  346193 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 17:59:42.139389  346193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614290-m04
	I0816 17:59:42.169155  346193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/ha-614290-m04/id_rsa Username:docker}
	I0816 17:59:42.267210  346193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 17:59:42.280731  346193 status.go:257] ha-614290-m04 status: &{Name:ha-614290-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
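The status check in the stderr trace above runs a fixed sequence per control-plane node: docker container inspect for host state, an SSH probe of kubelet via systemctl, then an HTTPS GET against the apiserver's /healthz endpoint, expecting 200 with body "ok". A minimal Go sketch of that last step, assuming a privately signed cluster certificate (hence the skipped TLS verification) and the endpoint shown in the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz mirrors the final step in the trace above
// (api_server.go:253): GET <endpoint>/healthz and treat an HTTP 200
// with body "ok" as a healthy apiserver.
func probeHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: the test cluster's apiserver cert is signed by a
		// private CA, so this illustrative probe skips verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.49.254:8443"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}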

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 node start m02 -v=7 --alsologtostderr: (17.763817775s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr: (1.563241446s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.17s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-614290 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-614290 -v=7 --alsologtostderr
E0816 18:00:23.312048  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.318427  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.329816  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.351195  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.392583  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.473971  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.635442  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:23.957234  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:24.599233  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:25.880528  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:28.441807  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:00:33.563432  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-614290 -v=7 --alsologtostderr: (37.171639065s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614290 --wait=true -v=7 --alsologtostderr
E0816 18:00:43.804770  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:01:04.286657  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:01:13.787881  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:01:41.493114  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:01:45.248718  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-614290 --wait=true -v=7 --alsologtostderr: (1m36.858482748s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-614290
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.17s)
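The cert_rotation errors interleaved above fire at roughly doubling intervals after 18:00:23.312 (about 6 ms, 11 ms, 21 ms, 41 ms, 81 ms, and so on), which is the shape of a jittered exponential backoff as the client retries reading the deleted functional-700256 client.crt. A sketch of that retry cadence only; the initial delay, factor, and cap below are illustrative guesses, not client-go's actual parameters:

package main

import (
	"fmt"
	"time"
)

// Reproduces the retry cadence visible in the timestamps above; the
// real backoff lives inside client-go's certificate manager.
func main() {
	delay := 6 * time.Millisecond
	for i := 1; i <= 12; i++ {
		fmt.Printf("retry %2d after %v\n", i, delay)
		delay *= 2
		if ceiling := 10 * time.Second; delay > ceiling {
			delay = ceiling
		}
	}
}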

TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 node delete m03 -v=7 --alsologtostderr: (9.621945244s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.55s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (36.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 stop -v=7 --alsologtostderr: (35.979223627s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr: exit status 7 (111.717899ms)
-- stdout --
	ha-614290
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614290-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614290-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0816 18:03:04.423947  360422 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:03:04.425304  360422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:04.425360  360422 out.go:358] Setting ErrFile to fd 2...
	I0816 18:03:04.425381  360422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:03:04.425659  360422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:03:04.425910  360422 out.go:352] Setting JSON to false
	I0816 18:03:04.425978  360422 mustload.go:65] Loading cluster: ha-614290
	I0816 18:03:04.426069  360422 notify.go:220] Checking for updates...
	I0816 18:03:04.426512  360422 config.go:182] Loaded profile config "ha-614290": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 18:03:04.426550  360422 status.go:255] checking status of ha-614290 ...
	I0816 18:03:04.427108  360422 cli_runner.go:164] Run: docker container inspect ha-614290 --format={{.State.Status}}
	I0816 18:03:04.445190  360422 status.go:330] ha-614290 host status = "Stopped" (err=<nil>)
	I0816 18:03:04.445210  360422 status.go:343] host is not running, skipping remaining checks
	I0816 18:03:04.445217  360422 status.go:257] ha-614290 status: &{Name:ha-614290 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:03:04.445241  360422 status.go:255] checking status of ha-614290-m02 ...
	I0816 18:03:04.445541  360422 cli_runner.go:164] Run: docker container inspect ha-614290-m02 --format={{.State.Status}}
	I0816 18:03:04.461795  360422 status.go:330] ha-614290-m02 host status = "Stopped" (err=<nil>)
	I0816 18:03:04.461817  360422 status.go:343] host is not running, skipping remaining checks
	I0816 18:03:04.461824  360422 status.go:257] ha-614290-m02 status: &{Name:ha-614290-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:03:04.461846  360422 status.go:255] checking status of ha-614290-m04 ...
	I0816 18:03:04.462151  360422 cli_runner.go:164] Run: docker container inspect ha-614290-m04 --format={{.State.Status}}
	I0816 18:03:04.489829  360422 status.go:330] ha-614290-m04 host status = "Stopped" (err=<nil>)
	I0816 18:03:04.489852  360422 status.go:343] host is not running, skipping remaining checks
	I0816 18:03:04.489859  360422 status.go:257] ha-614290-m04 status: &{Name:ha-614290-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.09s)
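Note how `minikube status` reports a fully stopped cluster: the run above exits with status 7 rather than 0 while still printing the per-node table. A caller that wants the table even when nodes are down has to treat that non-zero exit as data rather than failure; a minimal sketch, assuming a `minikube` binary on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStatus runs `minikube status` and surfaces the exit code as a
// value instead of an error, since a stopped cluster legitimately
// exits non-zero (7 in the run above) while still printing the table.
func clusterStatus(profile string) (string, int, error) {
	out, err := exec.Command("minikube", "-p", profile, "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	out, code, err := clusterStatus("ha-614290")
	if err != nil {
		panic(err) // binary missing or not startable, not a status result
	}
	fmt.Printf("exit=%d\n%s", code, out)
}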

TestMultiControlPlane/serial/RestartCluster (78.68s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614290 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0816 18:03:07.171056  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-614290 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.712342257s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.68s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.58s)

TestMultiControlPlane/serial/AddSecondaryNode (41.29s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-614290 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-614290 --control-plane -v=7 --alsologtostderr: (40.289359778s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-614290 status -v=7 --alsologtostderr: (1.000301932s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.29s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

TestJSONOutput/start/Command (52.48s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-442513 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0816 18:05:23.312076  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:05:51.012439  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-442513 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (52.472095185s)
--- PASS: TestJSONOutput/start/Command (52.48s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-442513 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-442513 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-442513 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-442513 --output=json --user=testUser: (5.778971552s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-793371 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-793371 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.26479ms)
-- stdout --
	{"specversion":"1.0","id":"d5bf44ea-0c67-457f-abe9-9d60b0f5bd58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-793371] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"036be31f-5bed-4555-aedb-e91444606466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"1f770196-5d68-4837-a9fb-49919eae471b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7b247162-1f2b-4aa6-9ce4-51e1b9c82c7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig"}}
	{"specversion":"1.0","id":"3e60b821-f29a-4727-800c-93e88eb29b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube"}}
	{"specversion":"1.0","id":"0ea16866-a6ef-48d4-9551-2e12311caabf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a96346ca-6ca0-4d81-bb5e-f92ce40c82b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5dba3cdc-6bc6-4958-9466-44341f4313aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-793371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-793371
--- PASS: TestErrorJSONOutput (0.21s)
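The --output=json events captured above are one JSON object per line in a CloudEvents-style envelope (specversion, id, source, type) with minikube's payload under data; the error event additionally carries exitcode and advice. A small decoder sketch for that stream; the struct mirrors only the fields visible in this log, not a published schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the envelope fields visible in the log above; the
// payload fields (message, exitcode, advice, ...) all arrive as strings.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines mixed into the stream
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("ERROR %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
}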

TestKicCustomNetwork/create_custom_network (44.48s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-598900 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-598900 --network=: (42.145387319s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-598900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-598900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-598900: (2.311338339s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.48s)

TestKicCustomNetwork/use_default_bridge_network (33.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-790116 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-790116 --network=bridge: (31.813582427s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-790116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-790116
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-790116: (1.956221955s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.79s)

TestKicExistingNetwork (33.41s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-406598 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-406598 --network=existing-network: (31.340311367s)
helpers_test.go:175: Cleaning up "existing-network-406598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-406598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-406598: (1.903430425s)
--- PASS: TestKicExistingNetwork (33.41s)

TestKicCustomSubnet (33.09s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-390811 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-390811 --subnet=192.168.60.0/24: (31.302152993s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-390811 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-390811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-390811
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-390811: (1.772476967s)
--- PASS: TestKicCustomSubnet (33.09s)
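The verification step above leans on docker's --format flag, which evaluates a Go text/template against the inspect object: `{{(index .IPAM.Config 0).Subnet}}` indexes the first IPAM config and reads its Subnet. A sketch evaluating the same expression against a stand-in struct, to show what the template walks:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Stand-in shapes for the slice of the inspect object the template
// touches; the real docker network type has many more fields.
type ipamConfig struct{ Subnet string }
type network struct {
	IPAM struct{ Config []ipamConfig }
}

func main() {
	var n network
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.60.0/24"}}
	tmpl := template.Must(template.New("subnet").Parse(
		`{{(index .IPAM.Config 0).Subnet}}`))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
	fmt.Println() // prints 192.168.60.0/24
}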

TestKicStaticIP (36.24s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-425184 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-425184 --static-ip=192.168.200.200: (34.003345945s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-425184 ip
helpers_test.go:175: Cleaning up "static-ip-425184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-425184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-425184: (2.089086028s)
--- PASS: TestKicStaticIP (36.24s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (68.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-561954 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-561954 --driver=docker  --container-runtime=containerd: (30.517285937s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-564464 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-564464 --driver=docker  --container-runtime=containerd: (32.313496687s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-561954
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-564464
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-564464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-564464
E0816 18:10:23.311854  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-564464: (1.95322512s)
helpers_test.go:175: Cleaning up "first-561954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-561954
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-561954: (2.255749405s)
--- PASS: TestMinikubeProfile (68.34s)

TestMountStart/serial/StartWithMountFirst (5.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-517392 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-517392 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.984725993s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.98s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-517392 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-530241 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-530241 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.67542283s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.68s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530241 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-517392 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-517392 --alsologtostderr -v=5: (1.614654063s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530241 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-530241
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-530241: (1.189300817s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.27s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-530241
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-530241: (6.269334329s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-530241 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (65.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-126917 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0816 18:11:13.788702  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-126917 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.735425366s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.23s)

TestMultiNode/serial/DeployApp2Nodes (15.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-126917 -- rollout status deployment/busybox: (13.284393907s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-4wr6c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-xnmg8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-4wr6c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-xnmg8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-4wr6c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-xnmg8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.05s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-4wr6c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-4wr6c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-xnmg8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-126917 -- exec busybox-7dff88458-xnmg8 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
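The pipeline in the exec above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, appears to lean on busybox nslookup's fixed layout: line 5 is the answer line for the queried name, and its third space-separated field is the IP that the next exec pings. A parsing sketch over a stand-in transcript; the sample text is illustrative, and strings.Fields collapses repeated spaces where cut -d' ' would not:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its third whitespace-separated field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4]) // awk's NR==5 is index 4 here
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Stand-in busybox-style transcript; not copied from the log.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1\n"
	fmt.Println(hostIP(sample)) // 192.168.67.1
}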

TestMultiNode/serial/AddNode (18.94s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-126917 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-126917 -v 3 --alsologtostderr: (18.274853934s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.94s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-126917 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

TestMultiNode/serial/CopyFile (9.96s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp testdata/cp-test.txt multinode-126917:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile793070930/001/cp-test_multinode-126917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917:/home/docker/cp-test.txt multinode-126917-m02:/home/docker/cp-test_multinode-126917_multinode-126917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test_multinode-126917_multinode-126917-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917:/home/docker/cp-test.txt multinode-126917-m03:/home/docker/cp-test_multinode-126917_multinode-126917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test.txt"
E0816 18:12:36.854538  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test_multinode-126917_multinode-126917-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp testdata/cp-test.txt multinode-126917-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile793070930/001/cp-test_multinode-126917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m02:/home/docker/cp-test.txt multinode-126917:/home/docker/cp-test_multinode-126917-m02_multinode-126917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test_multinode-126917-m02_multinode-126917.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m02:/home/docker/cp-test.txt multinode-126917-m03:/home/docker/cp-test_multinode-126917-m02_multinode-126917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test_multinode-126917-m02_multinode-126917-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp testdata/cp-test.txt multinode-126917-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile793070930/001/cp-test_multinode-126917-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m03:/home/docker/cp-test.txt multinode-126917:/home/docker/cp-test_multinode-126917-m03_multinode-126917.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917 "sudo cat /home/docker/cp-test_multinode-126917-m03_multinode-126917.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 cp multinode-126917-m03:/home/docker/cp-test.txt multinode-126917-m02:/home/docker/cp-test_multinode-126917-m03_multinode-126917-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 ssh -n multinode-126917-m02 "sudo cat /home/docker/cp-test_multinode-126917-m03_multinode-126917-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.96s)
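Every step of the CopyFile test above is the same two-command round trip: minikube cp places a file on a node (or pulls it off, or moves it node to node), then minikube ssh -n <node> with sudo cat proves the bytes arrived. A helper sketch under those assumptions; the profile, node, and paths below mirror the log but are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// copyAndVerify performs one round trip from the test above: push a
// file onto a node with `minikube cp`, then read it back over
// `minikube ssh -n <node>` and compare against the local content.
func copyAndVerify(profile, node, src, dst string, want []byte) error {
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v: %s", err, out)
	}
	if strings.TrimSpace(string(out)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("content mismatch on %s: got %q", node, out)
	}
	return nil
}

func main() {
	src := "testdata/cp-test.txt" // names mirror the log; adjust as needed
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	if err := copyAndVerify("multinode-126917", "multinode-126917-m02", src, "/home/docker/cp-test.txt", want); err != nil {
		fmt.Println(err)
	}
}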

TestMultiNode/serial/StopNode (2.37s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-126917 node stop m03: (1.234128121s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-126917 status: exit status 7 (591.899802ms)
-- stdout --
	multinode-126917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-126917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-126917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr: exit status 7 (539.359247ms)
-- stdout --
	multinode-126917
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-126917-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-126917-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0816 18:12:45.301462  413972 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:12:45.301671  413972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:12:45.301708  413972 out.go:358] Setting ErrFile to fd 2...
	I0816 18:12:45.301736  413972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:12:45.302038  413972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:12:45.302286  413972 out.go:352] Setting JSON to false
	I0816 18:12:45.302359  413972 mustload.go:65] Loading cluster: multinode-126917
	I0816 18:12:45.302458  413972 notify.go:220] Checking for updates...
	I0816 18:12:45.302901  413972 config.go:182] Loaded profile config "multinode-126917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 18:12:45.302943  413972 status.go:255] checking status of multinode-126917 ...
	I0816 18:12:45.303614  413972 cli_runner.go:164] Run: docker container inspect multinode-126917 --format={{.State.Status}}
	I0816 18:12:45.328572  413972 status.go:330] multinode-126917 host status = "Running" (err=<nil>)
	I0816 18:12:45.328600  413972 host.go:66] Checking if "multinode-126917" exists ...
	I0816 18:12:45.329030  413972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-126917
	I0816 18:12:45.356251  413972 host.go:66] Checking if "multinode-126917" exists ...
	I0816 18:12:45.356643  413972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:12:45.356705  413972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-126917
	I0816 18:12:45.378083  413972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/multinode-126917/id_rsa Username:docker}
	I0816 18:12:45.474498  413972 ssh_runner.go:195] Run: systemctl --version
	I0816 18:12:45.479032  413972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:12:45.491103  413972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:12:45.546477  413972 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-16 18:12:45.536221299 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:12:45.547138  413972 kubeconfig.go:125] found "multinode-126917" server: "https://192.168.67.2:8443"
	I0816 18:12:45.547174  413972 api_server.go:166] Checking apiserver status ...
	I0816 18:12:45.547224  413972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 18:12:45.559058  413972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1360/cgroup
	I0816 18:12:45.568935  413972 api_server.go:182] apiserver freezer: "3:freezer:/docker/c5e5b52098aeb6a46deaf607d7cb17a28151b2216ff8791c9b5245c1a912d57e/kubepods/burstable/pod0535258a0751a9942c8f0d599e1209ce/863373899b47f50af59ef7b167ec330aae1ea58ebc58f978776bd4a24e99d36f"
	I0816 18:12:45.569039  413972 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c5e5b52098aeb6a46deaf607d7cb17a28151b2216ff8791c9b5245c1a912d57e/kubepods/burstable/pod0535258a0751a9942c8f0d599e1209ce/863373899b47f50af59ef7b167ec330aae1ea58ebc58f978776bd4a24e99d36f/freezer.state
	I0816 18:12:45.577961  413972 api_server.go:204] freezer state: "THAWED"
	I0816 18:12:45.577987  413972 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0816 18:12:45.585537  413972 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0816 18:12:45.585566  413972 status.go:422] multinode-126917 apiserver status = Running (err=<nil>)
	I0816 18:12:45.585579  413972 status.go:257] multinode-126917 status: &{Name:multinode-126917 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:12:45.585601  413972 status.go:255] checking status of multinode-126917-m02 ...
	I0816 18:12:45.585902  413972 cli_runner.go:164] Run: docker container inspect multinode-126917-m02 --format={{.State.Status}}
	I0816 18:12:45.602332  413972 status.go:330] multinode-126917-m02 host status = "Running" (err=<nil>)
	I0816 18:12:45.602359  413972 host.go:66] Checking if "multinode-126917-m02" exists ...
	I0816 18:12:45.602673  413972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-126917-m02
	I0816 18:12:45.618257  413972 host.go:66] Checking if "multinode-126917-m02" exists ...
	I0816 18:12:45.618594  413972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 18:12:45.618642  413972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-126917-m02
	I0816 18:12:45.635015  413972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/19461-287979/.minikube/machines/multinode-126917-m02/id_rsa Username:docker}
	I0816 18:12:45.726201  413972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0816 18:12:45.738011  413972 status.go:257] multinode-126917-m02 status: &{Name:multinode-126917-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:12:45.738047  413972 status.go:255] checking status of multinode-126917-m03 ...
	I0816 18:12:45.738414  413972 cli_runner.go:164] Run: docker container inspect multinode-126917-m03 --format={{.State.Status}}
	I0816 18:12:45.767735  413972 status.go:330] multinode-126917-m03 host status = "Stopped" (err=<nil>)
	I0816 18:12:45.767759  413972 status.go:343] host is not running, skipping remaining checks
	I0816 18:12:45.767767  413972 status.go:257] multinode-126917-m03 status: &{Name:multinode-126917-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
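Note: the stderr trace above walks through minikube's full status probe for a control-plane node: inspect the Docker container, SSH in, check the kubelet unit, find the kube-apiserver PID with pgrep, resolve its freezer cgroup from /proc/<pid>/cgroup, confirm the cgroup is THAWED, then hit /healthz. Below is a minimal local sketch of the last three steps only (illustrative, not minikube's code: the real status.go/api_server.go logic runs these commands over SSH inside the node container, and the endpoint here is copied from the kubeconfig entry logged above):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the apiserver PID (the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*).
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*").Output()
		if err != nil {
			fmt.Println("apiserver not running:", err)
			return
		}
		pid := strings.TrimSpace(string(out))

		// Resolve the freezer cgroup from /proc/<pid>/cgroup and read its state.
		cg, _ := os.ReadFile("/proc/" + pid + "/cgroup")
		for _, line := range strings.Split(string(cg), "\n") {
			if strings.Contains(line, ":freezer:") {
				path := line[strings.LastIndex(line, ":")+1:]
				state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + path + "/freezer.state")
				fmt.Println("freezer state:", strings.TrimSpace(string(state))) // expect THAWED
			}
		}

		// Probe /healthz; a 200 response is what marks the apiserver Running.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		if resp, err := client.Get("https://192.168.67.2:8443/healthz"); err == nil {
			fmt.Println("healthz:", resp.StatusCode)
			resp.Body.Close()
		}
	}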

TestMultiNode/serial/StartAfterStop (9.34s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-126917 node start m03 -v=7 --alsologtostderr: (8.566697711s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.34s)

TestMultiNode/serial/RestartKeepsNodes (93.24s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-126917
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-126917
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-126917: (24.983816s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-126917 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-126917 --wait=true -v=8 --alsologtostderr: (1m8.140733269s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-126917
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.24s)

TestMultiNode/serial/DeleteNode (5.52s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-126917 node delete m03: (4.85714634s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.52s)
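Note: the final assertion above pipes the node list through a Go template that prints each node's Ready condition, so a healthy two-node cluster (after deleting m03) should emit exactly two "True" lines. A standalone sketch of the same template logic follows (kubectl's go-template addresses the lowercase JSON keys such as .items; the Go structs here use exported field names instead):

	package main

	import (
		"os"
		"text/template"
	)

	type condition struct{ Type, Status string }
	type node struct {
		Status struct{ Conditions []condition }
	}

	func main() {
		var list struct{ Items []node }
		for i := 0; i < 2; i++ { // two Ready nodes, as the test expects
			var n node
			n.Status.Conditions = []condition{{Type: "Ready", Status: "True"}}
			list.Items = append(list.Items, n)
		}
		tmpl := `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`
		template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, list)
	}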

TestMultiNode/serial/StopMultiNode (23.96s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-126917 stop: (23.789359756s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-126917 status: exit status 7 (83.569434ms)

-- stdout --
	multinode-126917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-126917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr: exit status 7 (84.106213ms)

-- stdout --
	multinode-126917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-126917-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 18:14:57.797504  422436 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:14:57.797716  422436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:14:57.797748  422436 out.go:358] Setting ErrFile to fd 2...
	I0816 18:14:57.797774  422436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:14:57.798004  422436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:14:57.798206  422436 out.go:352] Setting JSON to false
	I0816 18:14:57.798272  422436 mustload.go:65] Loading cluster: multinode-126917
	I0816 18:14:57.798342  422436 notify.go:220] Checking for updates...
	I0816 18:14:57.798711  422436 config.go:182] Loaded profile config "multinode-126917": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 18:14:57.798743  422436 status.go:255] checking status of multinode-126917 ...
	I0816 18:14:57.799538  422436 cli_runner.go:164] Run: docker container inspect multinode-126917 --format={{.State.Status}}
	I0816 18:14:57.816540  422436 status.go:330] multinode-126917 host status = "Stopped" (err=<nil>)
	I0816 18:14:57.816561  422436 status.go:343] host is not running, skipping remaining checks
	I0816 18:14:57.816568  422436 status.go:257] multinode-126917 status: &{Name:multinode-126917 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 18:14:57.816598  422436 status.go:255] checking status of multinode-126917-m02 ...
	I0816 18:14:57.816925  422436 cli_runner.go:164] Run: docker container inspect multinode-126917-m02 --format={{.State.Status}}
	I0816 18:14:57.834472  422436 status.go:330] multinode-126917-m02 host status = "Stopped" (err=<nil>)
	I0816 18:14:57.834492  422436 status.go:343] host is not running, skipping remaining checks
	I0816 18:14:57.834499  422436 status.go:257] multinode-126917-m02 status: &{Name:multinode-126917-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

TestMultiNode/serial/RestartMultiNode (58.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-126917 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0816 18:15:23.312226  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-126917 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (57.396673018s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-126917 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.03s)

TestMultiNode/serial/ValidateNameConflict (35.51s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-126917
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-126917-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-126917-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.590717ms)

-- stdout --
	* [multinode-126917-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-126917-m02' is duplicated with machine name 'multinode-126917-m02' in profile 'multinode-126917'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-126917-m03 --driver=docker  --container-runtime=containerd
E0816 18:16:13.788082  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-126917-m03 --driver=docker  --container-runtime=containerd: (33.091454535s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-126917
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-126917: exit status 80 (319.008176ms)

-- stdout --
	* Adding node m03 to cluster multinode-126917 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-126917-m03 already exists in multinode-126917-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-126917-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-126917-m03: (1.975969053s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.51s)
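Note: both non-zero exits above enforce the same invariant: a new profile name may not collide with an existing profile or with any machine name inside a multi-node profile (so "multinode-126917-m02" is taken even though no profile by that name exists). A hypothetical sketch of the rule (validateProfileName is an illustrative helper, not minikube's API):

	package main

	import "fmt"

	// validateProfileName rejects names that collide with an existing profile
	// or with a machine name (e.g. an "-m02" node) inside one.
	func validateProfileName(proposed string, profiles map[string][]string) error {
		for profile, machines := range profiles {
			if proposed == profile {
				return fmt.Errorf("profile name %q already exists", proposed)
			}
			for _, m := range machines {
				if proposed == m {
					return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", proposed, m, profile)
				}
			}
		}
		return nil
	}

	func main() {
		existing := map[string][]string{"multinode-126917": {"multinode-126917", "multinode-126917-m02"}}
		fmt.Println(validateProfileName("multinode-126917-m02", existing)) // rejected
		fmt.Println(validateProfileName("multinode-126917-m03", existing)) // ok: <nil>
	}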

TestPreload (126.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-915103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0816 18:16:46.374222  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-915103 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m18.596128876s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-915103 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-915103 image pull gcr.io/k8s-minikube/busybox: (1.239815719s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-915103
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-915103: (12.067318244s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-915103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-915103 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (32.25501101s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-915103 image list
helpers_test.go:175: Cleaning up "test-preload-915103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-915103
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-915103: (2.291607332s)
--- PASS: TestPreload (126.68s)

TestScheduledStopUnix (105.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-621666 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-621666 --memory=2048 --driver=docker  --container-runtime=containerd: (29.816860961s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621666 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-621666 -n scheduled-stop-621666
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621666 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621666 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621666 -n scheduled-stop-621666
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621666
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-621666 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-621666
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-621666: exit status 7 (62.283928ms)

-- stdout --
	scheduled-stop-621666
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621666 -n scheduled-stop-621666
E0816 18:20:23.311740  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-621666 -n scheduled-stop-621666: exit status 7 (68.003471ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-621666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-621666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-621666: (4.288743508s)
--- PASS: TestScheduledStopUnix (105.63s)
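Note: the sequence above exercises scheduled stops: `--schedule 5m` arms a background stop, a later `--schedule 15s` replaces it (hence "os: process already finished" for the superseded one), and `--cancel-scheduled` aborts the pending stop. A toy equivalent of the timer logic, assuming a stop callback (minikube actually forks a daemonized process; this only shows the arm/cancel shape):

	package main

	import (
		"fmt"
		"time"
	)

	// scheduleStop arms stop() to fire after d and returns a cancel function;
	// calling cancel() before the timer fires is the --cancel-scheduled path.
	func scheduleStop(d time.Duration, stop func()) (cancel func() bool) {
		return time.AfterFunc(d, stop).Stop
	}

	func main() {
		cancel := scheduleStop(5*time.Minute, func() { fmt.Println("stopping cluster") })
		fmt.Println("cancelled pending stop:", cancel()) // true: the stop never ran

		scheduleStop(100*time.Millisecond, func() { fmt.Println("stopping cluster") })
		time.Sleep(200 * time.Millisecond) // let the re-armed stop fire
	}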

TestInsufficientStorage (12.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-814768 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-814768 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.481917959s)

-- stdout --
	{"specversion":"1.0","id":"f5fbb2e7-1c18-4fde-bdcc-1df3c994c59c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-814768] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b9f4bc7-653e-42dd-bdc3-e2005d42a7fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19461"}}
	{"specversion":"1.0","id":"b491f58c-2319-494b-b9af-3c19f63c0b06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"378274d9-db75-4673-b709-95f7cd8402e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig"}}
	{"specversion":"1.0","id":"e04a5617-c42d-4989-8e01-1caef62a31c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube"}}
	{"specversion":"1.0","id":"21e2f6b0-46d7-417c-8347-1902258149a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"712def83-8f21-409d-a22f-94686ed08c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fda744ed-2764-4a4a-a5fa-8113234456e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"efd6375a-d37b-4354-b4da-6f7522e03b55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"da12f60e-77a6-419c-9b40-6669bcdf7a8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"415ed724-bfb7-4c2b-8ee0-821f1ee391f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2e7081fd-2522-40b6-9876-7e0f316f1540","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-814768\" primary control-plane node in \"insufficient-storage-814768\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"72bee5d3-9b8f-4084-a78e-6483cf068a80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"52d3a208-7741-4cf8-9aab-575e2b4c1e55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"326fbea7-6984-4639-8784-b9b7e2db5f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-814768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-814768 --output=json --layout=cluster: exit status 7 (276.515651ms)

-- stdout --
	{"Name":"insufficient-storage-814768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-814768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 18:20:38.360443  441164 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-814768" does not appear in /home/jenkins/minikube-integration/19461-287979/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-814768 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-814768 --output=json --layout=cluster: exit status 7 (273.612889ms)

-- stdout --
	{"Name":"insufficient-storage-814768","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-814768","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 18:20:38.634345  441227 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-814768" does not appear in /home/jenkins/minikube-integration/19461-287979/kubeconfig
	E0816 18:20:38.644339  441227 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/insufficient-storage-814768/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-814768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-814768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-814768: (1.890271202s)
--- PASS: TestInsufficientStorage (12.92s)
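Note: with --output=json, every progress line above is a CloudEvents envelope, and the test drives /var usage high via the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE overrides until start fails with exit code 26 (RSRC_DOCKER_STORAGE). A small sketch that scans such a stream for the error event (field names taken from the JSON above; the program name and piping are assumptions):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type cloudEvent struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Exitcode string `json:"exitcode"`
			Message  string `json:"message"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)              // e.g. minikube start --output=json ... | thisprogram
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can exceed the default line limit
		for sc.Scan() {
			var e cloudEvent
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // skip non-JSON lines
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", e.Data.Name, e.Data.Exitcode, e.Data.Message)
			}
		}
	}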

TestRunningBinaryUpgrade (91.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.776942299 start -p running-upgrade-531657 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.776942299 start -p running-upgrade-531657 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (51.44439888s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-531657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-531657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.387948948s)
helpers_test.go:175: Cleaning up "running-upgrade-531657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-531657
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-531657: (3.503624543s)
--- PASS: TestRunningBinaryUpgrade (91.10s)

TestKubernetesUpgrade (100.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.730979159s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-139041
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-139041: (1.226741286s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-139041 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-139041 status --format={{.Host}}: exit status 7 (64.696208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.673521523s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-139041 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (121.672055ms)

-- stdout --
	* [kubernetes-upgrade-139041] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-139041
	    minikube start -p kubernetes-upgrade-139041 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1390412 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-139041 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-139041 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.643696799s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-139041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-139041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-139041: (2.826832183s)
--- PASS: TestKubernetesUpgrade (100.47s)
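Note: the downgrade refusal above is a one-way version gate: a requested Kubernetes version older than the cluster's current one exits with K8S_DOWNGRADE_UNSUPPORTED instead of attempting an in-place downgrade. A sketch of the shape of that comparison using golang.org/x/mod/semver (an assumption for illustration; minikube has its own version helpers):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkVersionChange refuses to move an existing cluster to an older Kubernetes.
	func checkVersionChange(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
		}
		return nil
	}

	func main() {
		fmt.Println(checkVersionChange("v1.31.0", "v1.20.0")) // downgrade: error
		fmt.Println(checkVersionChange("v1.20.0", "v1.31.0")) // upgrade: <nil>
	}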

TestMissingContainerUpgrade (174.04s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3394843208 start -p missing-upgrade-052934 --memory=2200 --driver=docker  --container-runtime=containerd
E0816 18:21:13.790673  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3394843208 start -p missing-upgrade-052934 --memory=2200 --driver=docker  --container-runtime=containerd: (1m25.031632754s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-052934
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-052934: (10.300246938s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-052934
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-052934 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-052934 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.179789808s)
helpers_test.go:175: Cleaning up "missing-upgrade-052934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-052934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-052934: (2.684085937s)
--- PASS: TestMissingContainerUpgrade (174.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (74.317981ms)

-- stdout --
	* [NoKubernetes-480174] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (40.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-480174 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-480174 --driver=docker  --container-runtime=containerd: (39.537196155s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-480174 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.23s)

TestNoKubernetes/serial/StartWithStopK8s (19.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.576081205s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-480174 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-480174 status -o json: exit status 2 (314.964892ms)

-- stdout --
	{"Name":"NoKubernetes-480174","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-480174
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-480174: (1.992532771s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.88s)

TestNoKubernetes/serial/Start (6.47s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-480174 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.469520184s)
--- PASS: TestNoKubernetes/serial/Start (6.47s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-480174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-480174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (309.478741ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
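Note: the non-zero exit here is the point of the test: `systemctl is-active --quiet` exits 0 when the unit is active and 3 when it is inactive (the "Process exited with status 3" above), so any failure proves the kubelet is not running. A local sketch of reading that exit code:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("kubelet not active, systemctl exit code:", ee.ExitCode()) // expect 3
		} else if err == nil {
			fmt.Println("kubelet active")
		}
	}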

TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-480174
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-480174: (1.2341758s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-480174 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-480174 --driver=docker  --container-runtime=containerd: (7.637015053s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.64s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-480174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-480174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (331.279435ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/Setup (0.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

TestStoppedBinaryUpgrade/Upgrade (121.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1971123302 start -p stopped-upgrade-439168 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1971123302 start -p stopped-upgrade-439168 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.206277241s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1971123302 -p stopped-upgrade-439168 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1971123302 -p stopped-upgrade-439168 stop: (20.002900105s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-439168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-439168 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (49.065011477s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.28s)

TestPause/serial/Start (74.48s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-852854 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0816 18:25:23.311912  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-852854 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m14.483292695s)
--- PASS: TestPause/serial/Start (74.48s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-439168
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-439168: (1.161346719s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestNetworkPlugins/group/false (3.9s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-827639 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-827639 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (211.738529ms)

-- stdout --
	* [false-827639] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19461
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0816 18:26:21.950635  476429 out.go:345] Setting OutFile to fd 1 ...
	I0816 18:26:21.950838  476429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:26:21.950867  476429 out.go:358] Setting ErrFile to fd 2...
	I0816 18:26:21.950888  476429 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0816 18:26:21.951124  476429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19461-287979/.minikube/bin
	I0816 18:26:21.951579  476429 out.go:352] Setting JSON to false
	I0816 18:26:21.952598  476429 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7712,"bootTime":1723825070,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0816 18:26:21.952693  476429 start.go:139] virtualization:  
	I0816 18:26:21.960409  476429 out.go:177] * [false-827639] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0816 18:26:21.962572  476429 out.go:177]   - MINIKUBE_LOCATION=19461
	I0816 18:26:21.962700  476429 notify.go:220] Checking for updates...
	I0816 18:26:21.966715  476429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0816 18:26:21.968938  476429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19461-287979/kubeconfig
	I0816 18:26:21.970760  476429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19461-287979/.minikube
	I0816 18:26:21.972713  476429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0816 18:26:21.974678  476429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0816 18:26:21.977397  476429 config.go:182] Loaded profile config "pause-852854": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0816 18:26:21.977497  476429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0816 18:26:22.016656  476429 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0816 18:26:22.016875  476429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0816 18:26:22.098213  476429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-16 18:26:22.088681062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0816 18:26:22.098327  476429 docker.go:307] overlay module found
	I0816 18:26:22.101609  476429 out.go:177] * Using the docker driver based on user configuration
	I0816 18:26:22.103381  476429 start.go:297] selected driver: docker
	I0816 18:26:22.103403  476429 start.go:901] validating driver "docker" against <nil>
	I0816 18:26:22.103418  476429 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0816 18:26:22.105960  476429 out.go:201] 
	W0816 18:26:22.107747  476429 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0816 18:26:22.109442  476429 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-827639 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-827639

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-827639

>>> host: /etc/nsswitch.conf:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/hosts:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/resolv.conf:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-827639

>>> host: crictl pods:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: crictl containers:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> k8s: describe netcat deployment:
error: context "false-827639" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-827639" does not exist

>>> k8s: netcat logs:
error: context "false-827639" does not exist

>>> k8s: describe coredns deployment:
error: context "false-827639" does not exist

>>> k8s: describe coredns pods:
error: context "false-827639" does not exist

>>> k8s: coredns logs:
error: context "false-827639" does not exist

>>> k8s: describe api server pod(s):
error: context "false-827639" does not exist

>>> k8s: api server logs:
error: context "false-827639" does not exist

>>> host: /etc/cni:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: ip a s:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: ip r s:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: iptables-save:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: iptables table nat:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> k8s: describe kube-proxy daemon set:
error: context "false-827639" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-827639" does not exist

>>> k8s: kube-proxy logs:
error: context "false-827639" does not exist

>>> host: kubelet daemon status:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: kubelet daemon config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> k8s: kubelet logs:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-852854
contexts:
- context:
    cluster: pause-852854
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-852854
  name: pause-852854
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-852854
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.crt
    client-key: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-827639

>>> host: docker daemon status:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: docker daemon config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/docker/daemon.json:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: docker system info:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: cri-docker daemon status:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: cri-docker daemon config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: cri-dockerd version:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: containerd daemon status:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: containerd daemon config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/containerd/config.toml:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: containerd config dump:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: crio daemon status:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: crio daemon config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: /etc/crio:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

>>> host: crio config:
* Profile "false-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-827639"

----------------------- debugLogs end: false-827639 [took: 3.466205328s] --------------------------------
helpers_test.go:175: Cleaning up "false-827639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-827639
--- PASS: TestNetworkPlugins/group/false (3.90s)
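A note for anyone replaying this suite by hand: the "false" variant of TestNetworkPlugins intentionally requests no CNI, and minikube refuses that combination up front (the MK_USAGE exit in the stderr above) because the containerd runtime cannot wire pod networking without a CNI plugin, so the immediate rejection is the passing outcome. A minimal sketch of the two cases, assuming a Docker host like this runner (the "demo" profile name is illustrative):

    # Rejected: containerd requires a CNI, so this exits with MK_USAGE
    out/minikube-linux-arm64 start -p demo --driver=docker --container-runtime=containerd --cni=false

    # Accepted: naming any concrete CNI (bridge shown here) satisfies the check
    out/minikube-linux-arm64 start -p demo --driver=docker --container-runtime=containerd --cni=bridge

Every "context was not found" line in the debugLogs above follows from the same rejection: no cluster, and therefore no kubeconfig context, was ever created.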

x
+
TestPause/serial/SecondStartNoReconfiguration (7.27s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-852854 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-852854 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.248232218s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.27s)

x
+
TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-852854 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

x
+
TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-852854 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-852854 --output=json --layout=cluster: exit status 2 (413.28437ms)
-- stdout --
	{"Name":"pause-852854","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-852854","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
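The JSON above is worth decoding: status --output=json --layout=cluster reports state as HTTP-like codes (200 OK, 405 Stopped, 418 Paused), and the command exits non-zero (exit status 2 here) when components are not running, which is why the harness treats that exit as "may be ok". A sketch of consuming it outside the test, assuming jq is installed and using an illustrative profile name:

    # The `|| true` guard keeps a paused cluster from failing a shell script
    out/minikube-linux-arm64 status -p demo --output=json --layout=cluster || true

    # Extract just the cluster-level state name (jq is an assumption here)
    out/minikube-linux-arm64 status -p demo --output=json --layout=cluster | jq -r '.StatusName'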

x
+
TestPause/serial/Unpause (0.97s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-852854 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.97s)

x
+
TestPause/serial/PauseAgain (1.02s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-852854 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-852854 --alsologtostderr -v=5: (1.022338886s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

x
+
TestPause/serial/DeletePaused (3.1s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-852854 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-852854 --alsologtostderr -v=5: (3.096147294s)
--- PASS: TestPause/serial/DeletePaused (3.10s)

x
+
TestPause/serial/VerifyDeletedResources (0.3s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-852854
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-852854: exit status 1 (14.784786ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-852854: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.30s)
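This cleanup check relies on Docker's exit codes rather than output parsing: after the delete, docker volume inspect is expected to fail with "no such volume", so the non-zero exit above is the success condition. A sketch of the same verification as a standalone script, with an illustrative profile name:

    # Volume gone => inspect fails => cleanup verified
    if ! docker volume inspect demo >/dev/null 2>&1; then
        echo "volume removed"
    fi

    # The profile's network should be absent as well; an empty listing is expected
    docker network ls --filter name=demo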

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (155.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-686713 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0816 18:29:16.856721  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-686713 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.623698651s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.62s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-686713 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ed286a66-5072-4485-a24d-5980a287f31e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ed286a66-5072-4485-a24d-5980a287f31e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.055051178s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-686713 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.02s)
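The DeployApp step is the same smoke test for every group: create a busybox pod from testdata, wait for it to run, then exec into it to prove the runtime's exec path works (the "ulimit -n" just needs any successful output). Reproduced as plain kubectl, with an illustrative context name and kubectl wait standing in for the harness's pod-watch helper:

    kubectl --context demo create -f testdata/busybox.yaml
    kubectl --context demo wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context demo exec busybox -- /bin/sh -c "ulimit -n"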

x
+
TestStartStop/group/no-preload/serial/FirstStart (75.7s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-691813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-691813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m15.701049211s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.70s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-686713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-686713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.517700655s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-686713 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.74s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-686713 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-686713 --alsologtostderr -v=3: (13.453674591s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.45s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-686713 -n old-k8s-version-686713
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-686713 -n old-k8s-version-686713: exit status 7 (71.281183ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-686713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-691813 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [322903ce-2a3b-48b3-8da6-91e2f704be82] Pending
helpers_test.go:344: "busybox" [322903ce-2a3b-48b3-8da6-91e2f704be82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [322903ce-2a3b-48b3-8da6-91e2f704be82] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.01171418s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-691813 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.46s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-691813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-691813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041771717s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-691813 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

x
+
TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-691813 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-691813 --alsologtostderr -v=3: (12.119586993s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-691813 -n no-preload-691813
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-691813 -n no-preload-691813: exit status 7 (68.735793ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-691813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (267.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-691813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0816 18:33:26.375689  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:35:23.312240  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:36:13.787878  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-691813 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.723295468s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-691813 -n no-preload-691813
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.17s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x5w47" [d7d96685-861f-4a02-9a6c-3a3854d2b0e7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004369715s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x5w47" [d7d96685-861f-4a02-9a6c-3a3854d2b0e7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00390786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-691813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-691813 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

x
+
TestStartStop/group/no-preload/serial/Pause (3.29s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-691813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-691813 --alsologtostderr -v=1: (1.084477476s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-691813 -n no-preload-691813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-691813 -n no-preload-691813: exit status 2 (326.942159ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-691813 -n no-preload-691813
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-691813 -n no-preload-691813: exit status 2 (325.38831ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-691813 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-691813 -n no-preload-691813
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-691813 -n no-preload-691813
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.29s)

x
+
TestStartStop/group/embed-certs/serial/FirstStart (66.54s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-403200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-403200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m6.541103809s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.54s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d9dhg" [a0a3f5b7-d413-45c2-a4be-8cc49ac94e67] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004710675s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-d9dhg" [a0a3f5b7-d413-45c2-a4be-8cc49ac94e67] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004779158s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-686713 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-686713 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-686713 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-686713 -n old-k8s-version-686713
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-686713 -n old-k8s-version-686713: exit status 2 (342.007711ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-686713 -n old-k8s-version-686713
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-686713 -n old-k8s-version-686713: exit status 2 (438.761712ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-686713 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-686713 --alsologtostderr -v=1: (1.092355463s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-686713 -n old-k8s-version-686713
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-686713 -n old-k8s-version-686713
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.80s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-168814 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-168814 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (52.883607707s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.88s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-403200 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [86748222-0c59-41cb-84d2-a5ac6ce75550] Pending
helpers_test.go:344: "busybox" [86748222-0c59-41cb-84d2-a5ac6ce75550] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [86748222-0c59-41cb-84d2-a5ac6ce75550] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003443196s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-403200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.45s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-403200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-403200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

x
+
TestStartStop/group/embed-certs/serial/Stop (12.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-403200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-403200 --alsologtostderr -v=3: (12.043141813s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-168814 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e0c8dd3-b6a0-4a57-8527-024978aafa51] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e0c8dd3-b6a0-4a57-8527-024978aafa51] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004454938s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-168814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-403200 -n embed-certs-403200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-403200 -n embed-certs-403200: exit status 7 (71.657713ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-403200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (298.61s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-403200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-403200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m58.138749096s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-403200 -n embed-certs-403200
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.61s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-168814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-168814 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)
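
The --images/--registries flags point the metrics-server addon at a deliberately unreachable registry (fake.domain), which is presumably why the check is a describe of the Deployment spec rather than a health probe:

# Enable the addon with the override, then verify it landed in the Deployment.
out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-168814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
kubectl --context default-k8s-diff-port-168814 describe deploy/metrics-server -n kube-system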

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-168814 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-168814 --alsologtostderr -v=3: (12.23417403s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814: exit status 7 (131.198645ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-168814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-168814 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0816 18:40:23.311914  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.829557  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.836012  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.847568  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.868962  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.910392  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:24.991891  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:25.153417  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:25.475231  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:26.117482  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:27.398955  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:29.960301  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:35.082209  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:40:45.323901  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:05.805682  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:13.788408  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.315166  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.321669  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.333180  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.354992  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.396634  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.478432  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.640150  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:45.961854  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:46.603327  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:46.767948  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:47.884699  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:50.446100  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:41:55.568320  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:42:05.809715  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:42:26.290985  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:43:07.253098  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-168814 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m29.360717527s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.71s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6sgbp" [25485140-8743-41d2-b502-783e0bf823e3] Running
E0816 18:43:08.689956  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004111968s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
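
The harness drives this wait through client-go; a kubectl equivalent for checking the same condition by hand (a stand-in, not what the test runs):

kubectl --context default-k8s-diff-port-168814 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m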

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6sgbp" [25485140-8743-41d2-b502-783e0bf823e3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003636919s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-168814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-168814 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
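
The image check dumps the node's image store and flags anything outside the expected Kubernetes set, hence the "Found non-minikube image" lines for kindnet and the busybox test image:

# JSON output makes the list easy to diff against the expected image set.
out/minikube-linux-arm64 -p default-k8s-diff-port-168814 image list --format=json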

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-168814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814: exit status 2 (336.418354ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814: exit status 2 (412.768363ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-168814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)
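
Pause halts the kubelet and pauses the control-plane containers, so the two probes intentionally disagree: {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, each via exit status 2, which the harness accepts. The sequence by hand:

out/minikube-linux-arm64 pause -p default-k8s-diff-port-168814 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-168814 -n default-k8s-diff-port-168814
# After unpause, the same two probes exit 0 again (as at the end of the log above).
out/minikube-linux-arm64 unpause -p default-k8s-diff-port-168814 --alsologtostderr -v=1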

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wz4tw" [53a9cd59-901d-43f5-b302-77cc3f00dd02] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004549368s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (50.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-191960 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-191960 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (50.048892225s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.05s)
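
This start narrows the readiness wait to --wait=apiserver,system_pods,default_sa (presumably because, as the warnings below note, cni mode needs extra setup before pods can schedule), toggles a feature gate, and passes an extra flag through to kubeadm. The invocation, verbatim:

out/minikube-linux-arm64 start -p newest-cni-191960 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.0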

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wz4tw" [53a9cd59-901d-43f5-b302-77cc3f00dd02] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004453732s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-403200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-403200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-403200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-403200 -n embed-certs-403200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-403200 -n embed-certs-403200: exit status 2 (315.567972ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-403200 -n embed-certs-403200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-403200 -n embed-certs-403200: exit status 2 (309.513311ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-403200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-403200 -n embed-certs-403200
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-403200 -n embed-certs-403200
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

TestNetworkPlugins/group/auto/Start (66.66s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m6.661697124s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.66s)
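
The auto variant passes no --cni flag, so it exercises minikube's default CNI selection for the docker driver with containerd (contrast the flannel, calico, kindnet, and bridge starts below):

out/minikube-linux-arm64 start -p auto-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker --container-runtime=containerd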

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.02s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-191960 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-191960 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.019680577s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.02s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-191960 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-191960 --alsologtostderr -v=3: (1.374739125s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-191960 -n newest-cni-191960
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-191960 -n newest-cni-191960: exit status 7 (170.474493ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-191960 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/newest-cni/serial/SecondStart (17.1s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-191960 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0816 18:44:29.175324  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-191960 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (16.749423965s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-191960 -n newest-cni-191960
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.10s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-191960 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-191960 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-191960 -n newest-cni-191960
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-191960 -n newest-cni-191960: exit status 2 (323.171531ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-191960 -n newest-cni-191960
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-191960 -n newest-cni-191960: exit status 2 (313.696983ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-191960 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-191960 -n newest-cni-191960
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-191960 -n newest-cni-191960
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)
E0816 18:49:47.166932  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.173332  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.184692  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.206098  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.247479  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.328939  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.490418  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:47.812122  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:48.454408  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:49.735853  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:49:52.297921  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/flannel/Start (61.12s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.116213207s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j94br" [9f4932f7-9054-418d-9759-8073e7430a88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j94br" [9f4932f7-9054-418d-9759-8073e7430a88] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003871776s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.40s)
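
replace --force deletes and recreates the netcat deployment if one is left over from an earlier run, rather than patching in place; the pod is then awaited by its app=netcat label. By hand (kubectl wait standing in for the harness poller):

kubectl --context auto-827639 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-827639 wait --for=condition=Ready pod -l app=netcat --timeout=15m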

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-827639 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
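
DNS, Localhost, and HairPin are three one-shot probes run inside the netcat pod: cluster DNS resolution, a loopback connect, and a hairpin connect (the pod reaching port 8080 back through the in-cluster name netcat, i.e., itself). All three are plain kubectl execs:

kubectl --context auto-827639 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"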

TestNetworkPlugins/group/calico/Start (64.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0816 18:45:23.311850  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/functional-700256/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:45:24.834437  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.663782966s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.66s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ntsvm" [d5f5c8fc-e89e-4e5f-b5e6-e72cf76ac962] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003940867s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (9.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-55dvh" [26bd9c7d-673d-40e5-98cb-f51c6c95480f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 18:45:52.531814  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/old-k8s-version-686713/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-55dvh" [26bd9c7d-673d-40e5-98cb-f51c6c95480f] Running
E0816 18:45:56.858688  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/addons-864899/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004693198s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.46s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-827639 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.44s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (58.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.107368124s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.11s)
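
Unlike the built-in selectors used elsewhere in this run (--cni=flannel, --cni=calico, --cni=kindnet, --cni=bridge), this variant hands minikube an arbitrary CNI manifest from disk:

out/minikube-linux-arm64 start -p custom-flannel-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd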

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-hcrp8" [cfc49fa1-0410-49e0-92cb-78c302b76544] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004991736s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8d2x4" [e0876f63-7975-4975-aad0-b3be873254ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8d2x4" [e0876f63-7975-4975-aad0-b3be873254ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005102669s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

TestNetworkPlugins/group/calico/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-827639 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/kindnet/Start (60.25s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0816 18:47:13.017127  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/no-preload-691813/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m0.254721451s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z87rs" [c3b40b8c-f9a0-4e83-a61a-c782376f3fc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z87rs" [c3b40b8c-f9a0-4e83-a61a-c782376f3fc2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00439558s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-827639 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

TestNetworkPlugins/group/bridge/Start (77.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.709123833s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.71s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2n5zp" [e32fd1f9-82f4-473d-af40-83005e6d3def] Running
E0816 18:48:15.982658  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:15.988915  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:16.000268  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:16.021647  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:16.063002  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:16.144333  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:16.306157  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004453681s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-827639 "pgrep -a kubelet"
E0816 18:48:16.627859  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5dfm4" [0c2fcccd-9b30-4bab-b6f3-430002f5f31d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0816 18:48:17.270046  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:18.551577  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
E0816 18:48:21.112882  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5dfm4" [0c2fcccd-9b30-4bab-b6f3-430002f5f31d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.008489817s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.37s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-827639 exec deployment/netcat -- nslookup kubernetes.default
E0816 18:48:26.234899  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (39.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0816 18:48:56.959434  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-827639 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (39.132560498s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5g55x" [84725c16-a26d-4fb1-bdff-b09b5aabd537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5g55x" [84725c16-a26d-4fb1-bdff-b09b5aabd537] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004394132s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.45s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-827639 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-827639 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-827639 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z2f85" [9a72d767-1c84-4f2c-a91f-22965c1de85d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z2f85" [9a72d767-1c84-4f2c-a91f-22965c1de85d] Running
E0816 18:49:37.921421  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/default-k8s-diff-port-168814/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004561312s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (26.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-827639 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-827639 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.248962701s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-827639 exec deployment/netcat -- nslookup kubernetes.default
E0816 18:49:57.419295  293371 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/auto-827639/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-827639 exec deployment/netcat -- nslookup kubernetes.default: (10.167448322s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.43s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-827639 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-792554 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-792554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-792554
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-270244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-270244
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.44s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-827639 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-827639

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-827639

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/hosts:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/resolv.conf:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-827639

>>> host: crictl pods:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: crictl containers:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> k8s: describe netcat deployment:
error: context "kubenet-827639" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-827639" does not exist

>>> k8s: netcat logs:
error: context "kubenet-827639" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-827639" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-827639" does not exist

>>> k8s: coredns logs:
error: context "kubenet-827639" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-827639" does not exist

>>> k8s: api server logs:
error: context "kubenet-827639" does not exist

>>> host: /etc/cni:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: ip a s:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: ip r s:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: iptables-save:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: iptables table nat:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-827639" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-827639" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-827639" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: kubelet daemon config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> k8s: kubelet logs:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-852854
contexts:
- context:
    cluster: pause-852854
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-852854
  name: pause-852854
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-852854
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.crt
    client-key: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-827639

>>> host: docker daemon status:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: docker daemon config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: docker system info:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: cri-docker daemon status:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: cri-docker daemon config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: cri-dockerd version:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: containerd daemon status:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: containerd daemon config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: containerd config dump:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: crio daemon status:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: crio daemon config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: /etc/crio:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"

>>> host: crio config:
* Profile "kubenet-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-827639"
----------------------- debugLogs end: kubenet-827639 [took: 3.269752763s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-827639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-827639
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)

TestNetworkPlugins/group/cilium (5.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-827639 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-827639

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-827639" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-827639" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-827639

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-827639

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-827639" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-827639" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-827639" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-827639" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-827639" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: kubelet daemon config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> k8s: kubelet logs:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19461-287979/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-852854
contexts:
- context:
    cluster: pause-852854
    extensions:
    - extension:
        last-update: Fri, 16 Aug 2024 18:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-852854
  name: pause-852854
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-852854
  user:
    client-certificate: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.crt
    client-key: /home/jenkins/minikube-integration/19461-287979/.minikube/profiles/pause-852854/client.key

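Note on the kubeconfig dumped above: it registers only the pause-852854 cluster, context, and user, and current-context is empty, so there is no cilium-827639 entry for kubectl to resolve; that is consistent with every kubectl-backed collector above failing with context "cilium-827639" does not exist. As a sketch (assuming the same kubeconfig is active; these commands are not part of the captured run), the registered contexts could be verified with standard kubectl subcommands:

# list every context registered in the active kubeconfig;
# cilium-827639 would be absent from this table
kubectl config get-contexts

# print only the context names, convenient for scripted assertions
kubectl config view -o jsonpath='{.contexts[*].name}'
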
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-827639

>>> host: docker daemon status:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: docker daemon config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: docker system info:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: cri-docker daemon status:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: cri-docker daemon config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: cri-dockerd version:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: containerd daemon status:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: containerd daemon config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: containerd config dump:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: crio daemon status:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: crio daemon config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: /etc/crio:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

>>> host: crio config:
* Profile "cilium-827639" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-827639"

----------------------- debugLogs end: cilium-827639 [took: 4.827697137s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-827639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-827639
--- SKIP: TestNetworkPlugins/group/cilium (5.02s)
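Two failure shapes recur in the debug dump above and are worth distinguishing: kubectl-backed collectors fail with context "cilium-827639" does not exist / context was not found, because the kubeconfig has no such entry, while host-level collectors go through minikube and fail with Profile "cilium-827639" not found, because the profile was never created (the test was skipped). Following the log's own suggestions, a sketch of the verification and recovery path with the same binary the run used:

# enumerate the profiles this minikube binary knows about;
# cilium-827639 would not appear in the list
out/minikube-linux-arm64 profile list

# would (re)create the profile, if the debug session actually needed it
out/minikube-linux-arm64 start -p cilium-827639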